Set the mtime of a file with full microsecond precision in Python
Let’s say I created a test file and checked its mtime:
$ touch testfile.txt
$ stat testfile.txt
  File: `testfile.txt'
  Size: 0               Blocks: 0          IO Block: 4096   regular empty file
Device: fc01h/64513d    Inode: 3413533     Links: 1
Access: (0664/-rw-rw-r--)  Uid: ( 1000/ me)   Gid: ( 1000/ me)
Access: 2014-09-17 18:38:34.248965866 -0400
Modify: 2014-09-17 18:38:34.248965866 -0400
Change: 2014-09-17 18:38:34.248965866 -0400
 Birth: -
$ date -d '2014-09-17 18:38:34.248965866 -0400' +%s
1410993514
The mtime above is listed with microsecond precision and beyond (I realize that the system clock resolution makes the finer part of that resolution a bit useless).
The utimes(2) system call lets me pass microseconds separately from seconds. The os.utime() function, however, seems to combine them into a single number.
I can pass a float, like this:
>>> os.utime('testfile.txt', (1410993514.248965866, 1410993514.248965866))
$ stat testfile.txt
  File: `testfile.txt'
  Size: 0               Blocks: 0          IO Block: 4096   regular empty file
Device: fc01h/64513d    Inode: 3413533     Links: 1
Access: (0664/-rw-rw-r--)  Uid: ( 1000/ me)   Gid: ( 1000/ me)
Access: 2014-09-17 18:38:34.248965000 -0400
Modify: 2014-09-17 18:38:34.248965000 -0400
Change: 2014-09-17 18:46:07.544974140 -0400
 Birth: -
Presumably the loss of precision is because the value is converted to
float and Python knows that it cannot trust the last few decimal places.
Is there a way to set the full microsecond field via python?
You did set the full microseconds. Micro means one part per million; .248965 is 248965 microseconds.
.248965866 is 248965866 nanoseconds.
Of course it’s also 248965.866 microseconds, but os.utime is a portable API for setting times on every platform, and not every platform accepts fractional microseconds — Windows, for one, only takes whole microseconds. (And, in fact, POSIX doesn’t require the system to remember times any finer than microseconds.)
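To make the unit arithmetic concrete, here is a quick sketch using Python's divmod; the value is the fractional part of the stat timestamp above:

```python
# Split the fractional part of the stat timestamp (in nanoseconds) into
# the microseconds that utimes(2) can carry and the remainder it cannot:
ns = 248965866
micros, leftover = divmod(ns, 1000)
print(micros, leftover)  # 248965 866
```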
As of Python 3.3, os.utime takes an ns keyword argument on systems that support setting nanosecond timestamps.1,2 So you can pass the timestamps as integer numbers of nanoseconds, like this:
>>> os.utime('testfile.txt', ns=(1410993514248965866, 1410993514248965866))
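For completeness, here is a self-contained sketch of the same call on a scratch file (the timestamp is the one from the stat output above); it reads the result back through st_mtime_ns, which is also an integer. Note that os.utime raises an error if you pass both the times tuple and ns:

```python
import os
import tempfile

# Create a throwaway file to stamp:
fd, path = tempfile.mkstemp()
os.close(fd)

# 2014-09-17 18:38:34.248965866 -0400, in nanoseconds since the epoch:
ts_ns = 1410993514248965866
os.utime(path, ns=(ts_ns, ts_ns))  # ns only — don't pass times as well

st = os.stat(path)
print(st.st_mtime_ns)  # an int; full nanoseconds, if the filesystem stores them
os.unlink(path)
```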
One last thing:

"Presumably the loss of precision is because the value is converted to float and Python knows that it cannot trust the last few decimal places."
That might actually make sense… but Python doesn’t do that. You can see the exact code it uses here, but basically, the only compensation it makes for rounding is to make sure negative microseconds become 0.3
But you’re right that rounding errors are a potential issue here… which is why *nix and Python avoid the problem by using separate integer nanoseconds (and Windows avoids it by using a 64-bit int instead of a double).
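You can watch the float lose those digits directly; this sketch just formats the literal from the question back out to nine decimal places:

```python
t = 1410993514.248965866  # the full stat timestamp, as a float literal

# A 64-bit float carries only about 15-16 significant decimal digits, and
# the seconds part already uses 10 of them, so the trailing nanosecond
# digits were lost the moment the literal was parsed:
print(f"{t:.9f}")  # no longer ends in ...248965866
```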
1 On Unix, this means you have a utimensat function, which is like utimes but takes a struct timespec instead of a struct timeval. You should have it on any non-ancient Linux/glibc system; on *BSD it depends on the kernel, but I believe everything except OS X has it by now; otherwise, you may not have it. But the easiest way to check is…
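One practical way to check at runtime (a sketch with a hypothetical helper name, not necessarily what the footnote had in mind) is to round-trip a nanosecond timestamp on a scratch file and see whether it survives:

```python
import os
import tempfile

def supports_ns_mtime(directory=None):
    """Best-effort check: does utime(ns=...) round-trip on this platform/filesystem?"""
    fd, path = tempfile.mkstemp(dir=directory)
    os.close(fd)
    try:
        probe = 1410993514248965866  # arbitrary timestamp with non-zero nanoseconds
        os.utime(path, ns=(probe, probe))
        return os.stat(path).st_mtime_ns == probe
    finally:
        os.unlink(path)

print(supports_ns_mtime())
```

Note that this tests the filesystem as much as the OS: even where utimensat exists, a filesystem may store timestamps more coarsely.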
2 On Windows, Python uses the native Win32 API, which deals in units of 100ns, so this way you only get one extra digit, not three.
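A quick sketch of that truncation in integer arithmetic (the timestamp is the one from the question):

```python
ts_ns = 1410993514248965866

# Win32 FILETIME counts 100ns ticks, so a nanosecond timestamp gets
# truncated to a multiple of 100 — one digit past microseconds survives:
filetime_ticks = ts_ns // 100
print(filetime_ticks * 100)  # 1410993514248965800
```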
3 I linked to the 3.2 source because the 3.3 version is a bit harder to follow, partly because of the ns support you care about, but mostly because of the at support you don’t care about.