Description
This is a place to discuss whether supporting datetime logic with sub-nanosecond precision is desirable and feasible.
Motivation
I'm using pandas to interact with neuroscientific data, specifically electrophysiology recordings. These data are sampled at precisely controlled frequencies whose periods cannot be expressed as an integer number of nanoseconds. The datasets often span long enough durations that rounding the sampling period to the nearest nanosecond would result in an unacceptable accumulation of error, so it's important to represent the sampling period as accurately as possible. This precludes using pandas' datetime logic for the timestamps, which is unfortunate, because that logic makes things like down/up-sampling very easy.
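To make the scale of the problem concrete, here is a rough sketch of the drift I mean. The 30 kHz rate is just an illustrative assumption on my part, not taken from any particular dataset:

```python
import pandas as pd

# Illustrative example rate: 30 kHz (an assumption for this sketch).
# The true sampling period, 1/30000 s = 33333.333... ns, has no exact
# integer-nanosecond representation, so pandas has to round it.
rate_hz = 30_000
true_period_ns = 1e9 / rate_hz                                    # 33333.333... ns
rounded_period = pd.Timedelta(round(true_period_ns), unit="ns")   # 33333 ns

# Compare where the last sample of a one-hour recording lands when timestamps
# are built from the rounded period versus the exact sample times.
n_samples = rate_hz * 3600
true_last_ns = (n_samples - 1) * true_period_ns
rounded_last_ns = (n_samples - 1) * rounded_period.value          # .value = int ns

print(f"per-sample rounding error: {true_period_ns - rounded_period.value:.3f} ns")
print(f"drift after 1 hour: {(true_last_ns - rounded_last_ns) / 1e6:.1f} ms")  # ~36 ms
```

A third of a nanosecond per sample sounds negligible, but it adds up to tens of milliseconds (hundreds of samples) of misalignment by the end of an hour-long recording.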
The attotime library appears to offer arbitrary-precision timestamps with a datetime interface, so it might be a good starting point for this. I only just learned of it, though, so I'm not sure whether it's mature enough, etc.
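To show the kind of usage I have in mind, here is a minimal sketch. I'm assuming, from a quick skim of its docs, that attotime exposes attodatetime/attotimedelta with a fractional (Decimal-friendly) nanoseconds argument; I haven't verified this, so the exact API may be off:

```python
from decimal import Decimal
import attotime

# Assumed attotime API (attodatetime / attotimedelta with a fractional
# nanoseconds argument); not verified against the library.
t0 = attotime.attodatetime(2024, 1, 1)
period = attotime.attotimedelta(nanoseconds=Decimal("33333.333333333333"))

# A sub-nanosecond-precise sample time that pandas.Timestamp cannot express.
print(t0 + period)
```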
If this is out of scope for the project, or if there's an obvious workaround or best practice that I'm missing, sorry for the noise and please let me know!