FileTime vs. Unix Timestamp: Key Differences Explained

Understanding timestamps is essential for developers, system administrators, and anyone who works with file systems, logging, or time-based data. Two common representations you’ll encounter on Windows and Unix-like systems are Windows FILETIME (commonly called FileTime) and the Unix timestamp. This article explains what each format is, how they differ, how to convert between them, and practical considerations when using or comparing these timestamps.


What is FileTime?

Windows FILETIME is a 64-bit value representing the number of 100-nanosecond intervals since January 1, 1601 (UTC). It’s the native timestamp format used by many Windows APIs for file times (creation, last access, last write) and other kernel objects.

Key facts:

  • Epoch: January 1, 1601 (UTC)
  • Unit: 100-nanosecond intervals (10^-7 seconds)
  • Size: 64 bits (signed/unsigned interpretations vary by API)
  • Typical usage: Windows API structures (e.g., FILETIME), NTFS timestamps, and .NET, whose DateTime uses the same 100-ns tick unit (though DateTime.Ticks counts from 0001-01-01, not 1601).

Because it counts from a much earlier epoch and uses a finer resolution (100 ns), FILETIME can represent dates much earlier and much later than typical 32-bit Unix timestamps, and it offers higher time resolution.
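
To make the scale of these numbers concrete, here is a minimal Python sketch (the helper name to_filetime is ours, not a standard API) that derives the FILETIME tick count for a given UTC instant from the definitions above:

from datetime import datetime, timezone

FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def to_filetime(dt):
    # Exact integer arithmetic: whole days/seconds plus microseconds as 100-ns ticks
    delta = dt - FILETIME_EPOCH
    return (delta.days * 86_400 + delta.seconds) * 10_000_000 + delta.microseconds * 10

print(to_filetime(datetime(1970, 1, 1, tzinfo=timezone.utc)))  # 116444736000000000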


What is a Unix Timestamp?

The Unix timestamp (also called POSIX time or epoch time) is a count of seconds elapsed since January 1, 1970 (UTC), not counting leap seconds. It’s widely used on Unix-like systems, in many programming languages, and in internet protocols.

Key facts:

  • Epoch: January 1, 1970 (UTC)
  • Unit: Seconds (often stored as a 32-bit or 64-bit integer; fractional seconds may be added for sub-second precision)
  • Size: Commonly 32-bit historically (causing the Year 2038 problem), now typically 64-bit in modern systems
  • Typical usage: Unix/Linux file systems, POSIX APIs, many web APIs and logs.

Unix timestamps are simple, compact, and convenient for everyday use, though their coarse default resolution (1 second) may be insufficient for high-precision needs.
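
For example, obtaining the current Unix timestamp in Python (time.time() returns float seconds; the integer cast truncates to whole seconds):

import time
from datetime import datetime, timezone

print(int(time.time()))                        # current whole Unix seconds
print(datetime.now(timezone.utc).timestamp())  # same instant as a float, with fraction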


Binary representation and ranges

  • FILETIME: 64-bit count of 100-ns intervals since 1601. Interpreted as an unsigned 64-bit value, the maximum tick count lands around the year 60,056; Windows APIs treat FILETIME as a signed value whose positive range ends around the year 30,828. Either way, the range is practically unconstrained for contemporary uses.
  • Unix 32-bit timestamp: a signed 32-bit value ranges from 1901-12-13 to 2038-01-19 (the Year 2038 problem); both endpoints are checked in the sketch below.
  • Unix 64-bit timestamp: effectively unlimited for modern use (±292 billion years at one-second resolution).
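
The 32-bit endpoints follow directly from the signed-integer limits, as this quick Python check shows (negative timestamps can be platform-sensitive, so results may vary in some environments):

from datetime import datetime, timezone

print(datetime.fromtimestamp(-2**31, tz=timezone.utc))     # 1901-12-13 20:45:52+00:00
print(datetime.fromtimestamp(2**31 - 1, tz=timezone.utc))  # 2038-01-19 03:14:07+00:00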

Resolution and precision

  • FILETIME uses 100-nanosecond ticks, i.e., 10 million ticks per second.
  • Unix timestamps (integer seconds) provide 1-second resolution. Many systems extend them with fractional seconds (milliseconds, microseconds, or nanoseconds) for higher precision (e.g., struct timeval carries microseconds, struct timespec carries nanoseconds); see the example below.
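
For instance, Python's time module exposes both resolutions:

import time

print(time.time())     # float seconds since the Unix epoch; sub-second, limited by float precision
print(time.time_ns())  # integer nanoseconds since the Unix epoch (Python 3.7+)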

Endianness and platform considerations

Both formats are numeric values stored in binary. Endianness matters when serializing or transferring raw binary structures between architectures. APIs and file formats typically define byte order; when reading raw FILETIME structures from disk or network you must honor the stored endianness.
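
As a sketch, decoding a raw little-endian FILETIME (the common in-memory and on-disk layout on x86 Windows) with Python's struct module:

import struct

filetime = 116444736000000000            # FILETIME of 1970-01-01 UTC (derived below)
raw = struct.pack("<Q", filetime)        # "<Q" = little-endian unsigned 64-bit
(decoded,) = struct.unpack("<Q", raw)    # reading back must honor the stored byte order
assert decoded == filetime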


Converting between FileTime and Unix timestamp

To convert between the two, you need to account for:

  1. Different epochs: 1601-01-01 for FILETIME vs. 1970-01-01 for Unix.
  2. Different units: 100-ns ticks vs. seconds (or fractional seconds).

The offset between the two epochs is the number of 100-ns intervals (or seconds) from 1601-01-01 to 1970-01-01.

Epoch difference:

  • Days between 1601-01-01 and 1970-01-01 = 134,774 days
  • Seconds difference = 134,774 × 86,400 = 11,644,473,600 seconds
  • FILETIME ticks difference = 11,644,473,600 × 10,000,000 = 116444736000000000 (100-ns units)
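
These constants are easy to verify with Python's date arithmetic (Python's proleptic Gregorian calendar matches FILETIME's reckoning):

from datetime import date

days = (date(1970, 1, 1) - date(1601, 1, 1)).days
print(days)                           # 134774
print(days * 86_400)                  # 11644473600 seconds
print(days * 86_400 * 10_000_000)     # 116444736000000000 ticks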

Common conversions:

  • FILETIME -> Unix seconds: unix = (filetime / 10,000,000) - 11,644,473,600
  • Unix seconds -> FILETIME: filetime = (unix + 11,644,473,600) × 10,000,000

For sub-second precision, keep fractional parts (milliseconds, microseconds, or direct 100-ns ticks) in the arithmetic.
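
One way to keep sub-second precision is to split the tick count with divmod (a sketch; filetime_to_unix_parts is our own helper name):

def filetime_to_unix_parts(filetime):
    # Split into whole Unix seconds plus leftover 100-ns ticks: nothing is lost.
    seconds, ticks = divmod(filetime, 10_000_000)
    return seconds - 11_644_473_600, ticks

print(filetime_to_unix_parts(116444736000000001))  # (0, 1): the Unix epoch plus 100 ns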

Examples:

  • Convert FILETIME value 132269760000000000 to Unix: unix = 132269760000000000 / 10,000,000 - 11,644,473,600 = 1,582,502,400, which corresponds to 2020-02-24 00:00:00 UTC.
  • Convert Unix timestamp 0 (1970-01-01) to FILETIME: filetime = (0 + 11,644,473,600) × 10,000,000 = 116444736000000000

Examples in code

C# (.NET):

// FILETIME ticks use the same 100-ns unit as DateTime.Ticks; values are UTC.
const long EpochDiffSeconds = 11644473600L;

// Convert FILETIME (100-ns ticks since 1601) to whole Unix seconds.
long UnixFromFileTime(ulong fileTime)
{
    return (long)(fileTime / 10000000UL) - EpochDiffSeconds;
}

// Convert Unix seconds to FILETIME ticks.
ulong FileTimeFromUnix(long unixSeconds)
{
    return (ulong)((unixSeconds + EpochDiffSeconds) * 10000000L);
}

Python:

EPOCH_DIFF_SECS = 11644473600

def filetime_to_unix(filetime):
    # filetime is a count of 100-ns units since 1601-01-01 UTC
    return filetime / 10_000_000 - EPOCH_DIFF_SECS

def unix_to_filetime(unix_seconds):
    return int((unix_seconds + EPOCH_DIFF_SECS) * 10_000_000)
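
A quick round-trip check with these helpers:

print(filetime_to_unix(116444736000000000))  # 0.0 (true division returns a float)
print(unix_to_filetime(0))                   # 116444736000000000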

PowerShell:

# Convert FILETIME (a 64-bit tick count) to a UTC [datetime]
$filetime = 132269760000000000
[datetime]::FromFileTimeUtc([long]$filetime)

# Convert a DateTime to FILETIME
[datetime]::UtcNow.ToFileTimeUtc()

Practical issues and pitfalls

  • Time zones: Both FILETIME and Unix timestamps represent points in time in UTC. Displaying local times requires converting to the desired time zone. Do not treat these values as local time.
  • Leap seconds: Unix time (POSIX) ignores leap seconds; FILETIME also represents linear time without leap-second adjustments. For most applications, this is acceptable, but for astronomical or high-precision timekeeping, use specialized time standards (TAI/UTC handling).
  • Serialization and interoperability: When exchanging timestamps between systems, prefer numeric values in well-documented units (e.g., Unix seconds or milliseconds) or ISO 8601 strings. If you must exchange raw FILETIME structures, document endianness and signedness.
  • Year 2038 problem: Avoid 32-bit time_t for new systems; use 64-bit representations or FILETIME where appropriate.
  • Precision mismatch: Converting from FILETIME to integer Unix seconds loses sub-second precision unless you explicitly retain fractional parts.
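
For instance, converting the Unix value from the worked example above to an explicit UTC ISO 8601 string in Python avoids both the time-zone and epoch-confusion pitfalls:

from datetime import datetime, timezone

dt = datetime.fromtimestamp(1582502400, tz=timezone.utc)
print(dt.isoformat())  # 2020-02-24T00:00:00+00:00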

When to use which

  • Use FILETIME when interacting with Windows APIs, NTFS metadata, or .NET DateTime internals expecting 100-ns ticks since 1601.
  • Use Unix timestamps for cross-platform logging, web APIs, or systems that already adopt POSIX conventions.
  • For human-readable storage or APIs, use ISO 8601 strings (e.g., 2025-08-31T12:34:56Z) to avoid epoch confusion.

Quick reference table

Property       FileTime (Windows)                    Unix Timestamp (POSIX)
Epoch          1601-01-01 UTC                        1970-01-01 UTC
Unit           100-nanosecond ticks                  Seconds (commonly)
Typical size   64-bit                                32-bit (legacy) or 64-bit (modern)
Precision      100 ns                                1 s (or fractional when extended)
Use cases      Windows APIs, NTFS, .NET internals    Unix/Linux systems, web APIs, logs

Conclusion

FILETIME and Unix timestamps are different ways of representing instants in time: FILETIME uses a much earlier epoch and higher resolution (100 ns ticks), while Unix time uses a 1970 epoch and second-based units. Converting between them is straightforward once you account for the epoch offset (11644473600 seconds) and the unit difference (10,000,000 ticks per second). Choose the representation appropriate for your platform and interoperability needs, and prefer explicit documentation or ISO 8601 for cross-system data exchange.
