if ((tick1 = times(&tStruct1)) == -1) printf("Error time 1\n");
usleep(microseconds);
if ((tick2 = times(&tStruct2)) == -1) printf("Error time 2\n");
if ((dClock = sysconf(_SC_CLK_TCK)) == -1) printf("Error sysconf\n");
I get the following output:
usleep(1000000) takes 1010.00 ms
usleep(100000) takes 110.00 ms
usleep(10000) takes 20.00 ms
usleep(1000) takes 20.00 ms
usleep(100) takes 20.00 ms
usleep(1) takes 20.00 ms
Finished
Please, can anybody tell me why a usleep(1 usec) takes 20 ms?
(Linux 2.4, glibc2.2, gcc 2.95.2)
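
For reference, a self-contained sketch of this measurement (the names tick1, tick2, tStruct1, tStruct2, dClock and microseconds are taken from the fragment above; the declarations and the final printf are assumptions, since those parts were not posted):

#include <stdio.h>
#include <unistd.h>
#include <sys/times.h>

int main(void)
{
    struct tms tStruct1, tStruct2;      /* CPU-time fields, not used here */
    clock_t tick1, tick2;               /* wall-clock ticks from times() */
    long dClock;                        /* clock ticks per second */
    unsigned long microseconds = 1000;  /* example value */

    if ((tick1 = times(&tStruct1)) == -1) printf("Error time 1\n");
    usleep(microseconds);
    if ((tick2 = times(&tStruct2)) == -1) printf("Error time 2\n");
    if ((dClock = sysconf(_SC_CLK_TCK)) == -1) printf("Error sysconf\n");

    /* elapsed ticks scaled to milliseconds */
    printf("usleep(%lu) takes %.2f ms\n",
           microseconds, (tick2 - tick1) * 1000.0 / dClock);
    return 0;
}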
I think you are right. However, I have a requirement to wait for exactly x/1000 seconds. I will try to
realize this with the realtime clock (/dev/rtc), but I am wondering whether there really is no other way
to do this, like the function "VOID Sleep(DWORD dwMilliseconds)" in Win32, which seems to wait for exactly
the specified number of milliseconds.
Any idea?
Thanks for your response.
Lars
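
An aside on the question above (an illustration only, not the /dev/rtc approach): a millisecond wait can also be built on nanosleep(). On a 2.4 kernel it is still limited by the timer-tick granularity unless the process runs under a realtime scheduling policy, so treat this as a sketch rather than a guaranteed-exact delay; the helper name msleep is made up here:

#include <errno.h>
#include <time.h>

/* Wait roughly the given number of milliseconds, restarting the
   sleep if a signal interrupts it. */
static void msleep(long milliseconds)
{
    struct timespec req, rem;

    req.tv_sec  = milliseconds / 1000;
    req.tv_nsec = (milliseconds % 1000) * 1000000L;

    while (nanosleep(&req, &rem) == -1 && errno == EINTR)
        req = rem;              /* continue with the remaining time */
}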
Hello Lars
I'm not a Linux specialist, but: usleep() normally does not wait in units of
microseconds, as you might think; it waits in whole system ticks!
The value of dClock gives you the number of ticks per second (the value of
dClock for your system seems to be 50, which equals 20 ms between two ticks).
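
As a quick check of that arithmetic (reusing dClock from the code at the top; the value 50 is only inferred from the 20 ms steps in the output):

    ms per tick = 1000 / dClock = 1000 / 50 = 20 ms

so any interval measured with times() comes out as a whole multiple of 20 ms, no matter how small the argument to usleep() was.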