Dec 29

On the eve of a new decade, I thought a cautionary tale about dates and time might be fitting.

An iPhone app that I was developing had to send data to a server. Among the data items was a timestamp.

As a best practice during development, the app was tested on several types of devices with different versions of the OS. It was working great. Except on one particular iPhone.

There was nothing obviously different about this device. Same hardware and same OS as on the other devices that were working fine. We restarted the device and reinstalled the app several times. Still, it refused to communicate properly with the server.

I’ll spare you the details of the hours of debugging that followed, and skip directly to the solution…

Sending timestamps between different systems is a common source of confusion and errors. For this app we had settled on expressing the time as the number of seconds since the Unix epoch (January 1, 1970). This value was then to be sent as a URL parameter in an HTTP GET request. This is a convenient way to transmit a timestamp between systems, since most programming languages have a way to create a date and time object from this long value.
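
To make that concrete, here is a minimal sketch in Objective-C of building such a request URL. The host and the "ts" parameter name are made up for illustration; they are not the actual API the app used.

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            // Seconds since the Unix epoch (January 1, 1970), as a double.
            NSTimeInterval seconds = [[NSDate date] timeIntervalSince1970];

            // Truncate to whole seconds and append it as a URL parameter.
            // The host and parameter name are placeholders for this example.
            long long ts = (long long)seconds;
            NSString *urlString =
                [NSString stringWithFormat:@"https://example.com/api/upload?ts=%lld", ts];
            NSURL *url = [NSURL URLWithString:urlString];
            NSLog(@"GET %@", url);
        }
        return 0;
    }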

If you print out [[NSDate date] timeIntervalSince1970] you will see a 10-digit number. And that was what the server was expecting. However, if you look at the number (1262122135 as I’m writing this) you’ll notice that it was not that long ago that the value went from 9 to 10 digits. In fact, this happened on the 9th of September 2001 at 01:46:40 GMT.
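
You can verify that rollover moment yourself: one billion seconds after the epoch is the first 10-digit value.

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            // 1,000,000,000 seconds after the epoch: the moment the
            // Unix timestamp grew from 9 to 10 digits.
            NSDate *rollover = [NSDate dateWithTimeIntervalSince1970:1000000000];

            NSDateFormatter *fmt = [[NSDateFormatter alloc] init];
            fmt.dateFormat = @"yyyy-MM-dd HH:mm:ss zzz";
            fmt.timeZone = [NSTimeZone timeZoneWithName:@"GMT"];
            NSLog(@"%@", [fmt stringFromDate:rollover]); // 2001-09-09 01:46:40 GMT
        }
        return 0;
    }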

Upon further examination, the obstinate iPhone that refused to run the app correctly had its system clock set to early 2001. Thus the server call contained a 9-digit timestamp instead of the expected 10 digits, and that is what caused the failure when running the app.

The moral of the story: never trust any data that you do not fully control. User input is such an obvious category that most developers always validate it. The system clock can also be set by the user and should therefore not be implicitly trusted.
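
One way to guard against this, sketched below rather than taken from our actual code, is to validate a client-supplied timestamp against a plausible range instead of assuming it has a fixed number of digits. The helper name and the one-day tolerance are just assumptions for the example.

    #import <Foundation/Foundation.h>
    #include <math.h>

    // Hypothetical check: accept a client-supplied timestamp only if it
    // falls within a plausible window around the current time.
    static BOOL IsPlausibleTimestamp(long long ts, NSTimeInterval toleranceSeconds) {
        NSTimeInterval now = [[NSDate date] timeIntervalSince1970];
        return fabs((double)ts - now) <= toleranceSeconds;
    }

    int main(void) {
        @autoreleasepool {
            long long clientTimestamp = 983404800; // early 2001, only 9 digits
            if (!IsPlausibleTimestamp(clientTimestamp, 24 * 60 * 60)) {
                NSLog(@"Rejecting timestamp %lld: the client clock looks wrong",
                      clientTimestamp);
            }
        }
        return 0;
    }

Validating the value’s meaning, rather than its length, catches a badly set clock no matter how many digits the number happens to have.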

written by Nick

4 Responses to “Do Not Trust The System Clock”

  1. Adrian Says:

    Wouldn’t it have been better to have the server create the timestamps at each connection? This way you centralize time information with a single clock (the server’s). I agree that you should not trust user data, but you might as well have skipped the timestamp feature altogether. I don’t know, maybe you couldn’t, so excuse my ignorance 🙂

  2. Nick Says:

    @Adrian: I completely agree with you that having a single system of reference for time is in most cases the best design. However, in this rather unusual case the timestamp had to be generated on the client to meet the requirements. I can’t go into too much detail; suffice it to say that it was security related.

  3. Johan Says:

    Thanks for the insight. I always marvel at the bugs you find when your system’s clock is out of sync. Try using certificates when your clock is set to Jan 1st 2001 and see what strange errors you receive 🙂

  4. Dave Says:

    Given the requirement that the timestamp be provided by the client, the mistake that you made was in assuming that the resulting value would have a length of 10 characters. It would have been much better to accept the value that the client provided, then attempt to validate it as an actual date.
