Conversation
Looking at the implementation, I can't help but see the similarities with the already-added MicroTime source: https://github.com/ircmaxell/RandomLib/blob/master/lib/RandomLib/Source/MicroTime.php#L67 It looks like it's doing basically the same thing: looping and adding new time data each iteration.

The problem I have with this particular implementation is that it's fairly complex (and hence difficult to really see what's going on), yet doesn't bring in any significant entropy sources. So it's basically just throwing together a bunch of logic. Looking deeper into it, I see that the only actual entropy that enters into the mix is the timestamps themselves.

In the end, it looks like a LOT of undocumented code and complex algorithms for not much benefit, especially considering that there's already a source (above) based on microtime that uses a simpler, more standard gather-process-output algorithm and is pretty well documented.

I'll leave this open for a little while in case anyone can see something that I missed, or can give a good justification for it. Additionally, if it were accepted, the strength would need to be reduced to VERYLOW, as there is no actual random entropy other than the timestamps (which is very minor)...

Thanks!
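For context, the gather-process-output pattern mentioned above can be sketched as follows. This is a minimal illustration in Python, not the RandomLib PHP code; the function name and constants are invented for the example. The only entropy input is the clock, which is why such a source rates very low.

```python
import hashlib
import time

def microtime_bytes(n: int) -> bytes:
    """Illustrative sketch of a gather-process-output microtime source
    (names and constants are hypothetical, not the RandomLib API):
    gather fresh clock samples, process them by hashing into a running
    state, then output digest bytes."""
    out = b""
    state = b""
    while len(out) < n:
        # Gather: mix a batch of timestamp samples into the state.
        for _ in range(32):
            sample = str(time.perf_counter_ns()).encode()
            state = hashlib.sha256(state + sample).digest()
        # Output: emit the processed state.
        out += state
    return out[:n]
```

The structure is easy to audit: every byte of output traces back to hashed timestamps, and nothing else.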
You're right that it burns CPU time and is microtime-based. Basically, it's relying on there being an uncertain delta between each pair of microsecond measurements performed over a fixed period of time. That keeps the quality low, but it should yield a little entropy each iteration. My own concern is its performance more than anything else: it's trading time (~20ms) for a large stack of deltas.

I actually missed what you were doing in the MicroTime source - it's a similar idea, but I think the difference comes down to whether more is better.
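The delta-stacking idea described above can be sketched like this. Again a Python illustration with invented names, not the submitted PHP: it spins for a fixed window and records the jitter between consecutive clock reads, which is where the (small amount of) entropy comes from.

```python
import time

def gather_deltas(duration_s: float = 0.02) -> list:
    """Hypothetical sketch: collect jitter deltas between consecutive
    clock reads for roughly `duration_s` seconds (~20 ms, as in the
    discussion). Each delta carries at most a little entropy, since
    scheduling noise is the only unpredictable input."""
    deltas = []
    prev = time.perf_counter_ns()
    deadline = prev + int(duration_s * 1e9)
    while prev < deadline:
        now = time.perf_counter_ns()
        deltas.append(now - prev)  # the uncertain gap between reads
        prev = now
    return deltas
```

The trade-off is visible here: the loop deliberately burns the whole window to accumulate a large stack of low-quality deltas.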
Based on https://github.com/GeorgeArgyros/Secure-random-bytes-in-PHP