Cristian's Algorithm: Simple and Efficient Time Synchronization

Learn how Cristian's Algorithm synchronizes computer clocks with a time server, especially in low-latency networks. Understand the process of time synchronization and its importance for distributed systems.



Cristian's Algorithm: A Clock Synchronization Method

Cristian's Algorithm is used by client processes to synchronize their time with a time server. It is especially effective in low-latency networks, where round trip times are short and accuracy matters. It is less well suited to redundancy-prone distributed systems, which typically need fault-tolerant synchronization schemes rather than a single time server. In this context, Round Trip Time (RTT) is the interval between sending a request and receiving the corresponding response.

How Cristian's Algorithm Works

The process on the client machine sends a request to the clock server at time T0, asking for the server's clock time. The server responds with its current time, TSERVER. The client receives the response at time T1 and computes its synchronized time using the formula:

Syntax

TCLIENT = TSERVER + (T1 - T0) / 2;

Where:

  • TCLIENT is the synchronized client clock time.
  • TSERVER is the time returned by the server.
  • T0 is the time the client sends the request.
  • T1 is the time the client receives the response.

This formula works well when the network latency is roughly symmetric, that is, when the request (client to server) and the response (server to client) take about the same time. Under that assumption, the difference between the client-side time and the real time will not exceed (T1 - T0) / 2 seconds, so the synchronization error lies in the interval:

Syntax

Error E = [-(T1 - T0) / 2, (T1 - T0) / 2]
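As a quick numeric sketch, the formula and its error bound can be evaluated directly. All timestamps below are hypothetical, expressed in seconds:

```python
# Hypothetical timestamps, in seconds
T0 = 100.000        # client sends the request
T1 = 100.020        # client receives the response (RTT = 20 ms)
T_SERVER = 100.013  # clock time reported by the server

# Synchronized client time: server time plus half the round trip time
T_CLIENT = T_SERVER + (T1 - T0) / 2

# The synchronization error is bounded by half the round trip time
max_error = (T1 - T0) / 2

print("Synchronized time: %.3f" % T_CLIENT)         # 100.023
print("Error bound: +/- %.3f seconds" % max_error)  # +/- 0.010
```

Note that the client adds half the RTT on the assumption that the server's reply spent about half the round trip in transit back to the client.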

Python Code Examples

Below are Python examples demonstrating how Cristian's algorithm works in practice:

Clock Server Prototype (Python)

To start a clock server on a local machine, use the following Python code:

Syntax

# Python3 program imitating a clock server

import socket
import datetime

def initiateClockServer():
    # Create a TCP socket and listen on port 8000
    s = socket.socket()
    print("Socket successfully created")

    port = 8000
    s.bind(('', port))
    s.listen(5)
    print("Socket is listening...")

    # Serve each client: send the current server time, then close
    while True:
        connection, address = s.accept()
        print('Server connected to', address)
        connection.send(str(datetime.datetime.now()).encode())
        connection.close()

if __name__ == '__main__':
    initiateClockServer()

Output

Socket successfully created
Socket is listening...

Client Process Prototype (Python)

Here is the Python code for a client process that communicates with the clock server:

Syntax

# Python3 program imitating a client process

import socket
import datetime
from dateutil import parser
from timeit import default_timer as timer

def synchronizeTime():
    s = socket.socket()
    port = 8000
    s.connect(('127.0.0.1', port))

    request_time = timer()                             # T0: request sent
    server_time = parser.parse(s.recv(1024).decode())  # TSERVER
    response_time = timer()                            # T1: response received
    actual_time = datetime.datetime.now()

    print("Time returned by server: " + str(server_time))
    process_delay_latency = response_time - request_time
    print("Process delay latency: " + str(process_delay_latency) + " seconds")
    print("Actual clock time at client side: " + str(actual_time))

    # TCLIENT = TSERVER + (T1 - T0) / 2
    client_time = server_time + datetime.timedelta(seconds=process_delay_latency / 2)
    print("Synchronized process client time: " + str(client_time))
    error = actual_time - client_time
    print("Synchronization error: " + str(error.total_seconds()) + " seconds")
    s.close()

if __name__ == '__main__':
    synchronizeTime()

Output

Time returned by server: 2018-11-07 17:56:43.302379
Process Delay latency: 0.0005150819997652434 seconds
Actual clock time at client side: 2018-11-07 17:56:43.302756
Synchronized process client time: 2018-11-07 17:56:43.302637
Synchronization error : 0.000119 seconds

Improving Synchronization Accuracy

To improve synchronization accuracy, we can perform iterative tests and keep the exchange with the smallest round trip time. If a minimum message transfer time Tmin is known, the server's timestamp must have been generated no earlier than T0 + Tmin and no later than T1 - Tmin. The synchronization error bound then tightens to:

Syntax

Error E = [-((T1 - T0)/2 - Tmin), ((T1 - T0)/2 - Tmin)]
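Continuing with hypothetical numbers (RTT of 20 ms and an assumed minimum one-way transfer time of 4 ms), the tightened bound can be computed as:

```python
# Hypothetical values: RTT = 20 ms, minimum one-way transfer time = 4 ms
T0, T1 = 100.000, 100.020
T_MIN = 0.004

bound_without_tmin = (T1 - T0) / 2         # half the RTT
bound_with_tmin = (T1 - T0) / 2 - T_MIN    # tightened bound using Tmin

print("Error without Tmin: +/- %.3f seconds" % bound_without_tmin)  # +/- 0.010
print("Error with Tmin:    +/- %.3f seconds" % bound_with_tmin)     # +/- 0.006
```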

If the request and response latencies differ significantly, separate minimum latencies, Tmin1 for the request and Tmin2 for the response, can be used to reduce the error. In this case, the synchronized time is calculated using the following formula:

Syntax

TCLIENT = TSERVER + (T1 - T0)/2 + (Tmin2 - Tmin1)/2

Through iterative testing and by adjusting for network delays, synchronization accuracy can be improved, reducing the overall error.
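A small sketch with assumed per-direction minimum latencies (Tmin1 for the request, Tmin2 for the response; all numbers hypothetical) shows how this correction shifts the midpoint estimate:

```python
# Hypothetical timestamps and minimum latencies, in seconds
T0, T1 = 100.000, 100.020
T_SERVER = 100.013
T_MIN1 = 0.004  # minimum request (client -> server) latency
T_MIN2 = 0.008  # minimum response (server -> client) latency

# Shift the RTT midpoint by half the latency asymmetry between directions
T_CLIENT = T_SERVER + (T1 - T0) / 2 + (T_MIN2 - T_MIN1) / 2

print("Synchronized time: %.3f" % T_CLIENT)  # 100.025
```

Because the response is assumed to take longer than the request here, the estimate lands slightly later than the plain RTT-midpoint formula would give.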