Description
While they come with a caveat that they are an "approximate" value, these constants are treated as gospel and invoked by Clippy's `excessive_precision` lint to produce warnings. Clippy is wrong to do so, however. The numbers currently embedded in these constants reflect only the lower bound: the count of decimal digits guaranteed to survive conversion to binary and back, beyond which precision cannot be certain.
A binary32 may, however, require up to 9 significant decimal digits, and a binary64 up to 17, and it is these numbers that are recommended as minimums when printing decimal strings if one wants to assure the string will parse back to the exact original value. In other words, there is a range. These const values have led people to misleading conclusions, and the correct answer cannot even be inferred from the documentation, since it never specifies in what sense the values are "approximate".
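To make the two ends of the range concrete, here is a minimal sketch using the standard library's `f32::DIGITS` constant and ordinary `format!`/`parse`. The specific value chosen (the f32 immediately above 1.0) is my own illustrative example, not from the original report:

```rust
fn main() {
    // The constants in question: the lower bound of the range, i.e. the
    // number of decimal digits guaranteed to survive a
    // decimal -> binary -> decimal round trip.
    assert_eq!(f32::DIGITS, 6);
    assert_eq!(f64::DIGITS, 15);

    // The smallest f32 strictly greater than 1.0.
    let y = f32::from_bits(1.0f32.to_bits() + 1);

    // Printing with only 6 significant digits loses the value:
    // "1.00000e0" parses back to 1.0, not y.
    let six_digits = format!("{:.5e}", y);
    let reparsed: f32 = six_digits.parse().unwrap();
    assert_ne!(reparsed, y);

    // Printing with 9 significant digits (the upper bound for binary32)
    // round-trips bit-for-bit.
    let nine_digits = format!("{:.8e}", y);
    let reparsed: f32 = nine_digits.parse().unwrap();
    assert_eq!(reparsed, y);
}
```

So both numbers are "the precision of an f32" in some sense; which one you need depends on the direction of the round trip, and the constants only document one side.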
Either these constants should reflect the upper bound of precision, or another constant should be added alongside them reflecting the other end of the range (with the documentation discussing both ends), or they should be deprecated wholesale.
EDIT: I misunderstood a segment of the code, and Clippy's lint is a touch less problematic than I thought. Alas, the misunderstanding arose precisely because of the confusing names...