tailf: fixed timing issue that could cause duplicate data output
The issue is that roll_file() calls fstat() to find the file size, then read()s
as much data as it can, and then uses the previously saved file size to mark
its position. The bug occurs when we read past the size reported by fstat()
because more data arrived while we were reading. This patch uses the current
file position as the position marker instead, with some extra logic to handle
tailing truncated files.
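
A minimal sketch of the idea (hypothetical, simplified names and error
handling; not the actual patch):

    #include <stdio.h>
    #include <unistd.h>
    #include <fcntl.h>
    #include <sys/types.h>
    #include <sys/stat.h>

    /*
     * Illustrative sketch only.  'size' holds the position saved by the
     * previous roll; on return it holds the new marker.
     */
    static void roll_file_sketch(const char *filename, off_t *size)
    {
        char buf[BUFSIZ];
        struct stat st;
        ssize_t rc;
        off_t pos;
        int fd, read_something = 0;

        fd = open(filename, O_RDONLY);
        if (fd < 0)
            return;
        if (fstat(fd, &st) < 0 || st.st_size == *size) {
            close(fd);
            return;
        }

        if (lseek(fd, *size, SEEK_SET) != (off_t) -1) {
            /* Drain everything available, including data that arrived
             * after the fstat() above. */
            while ((rc = read(fd, buf, sizeof(buf))) > 0) {
                fwrite(buf, 1, (size_t) rc, stdout);
                read_something = 1;
            }
            fflush(stdout);
        }

        pos = lseek(fd, 0, SEEK_CUR);

        if (read_something && pos != (off_t) -1)
            /* Mark where the read actually stopped, so bytes read past
             * the fstat()'d size are not printed again next time. */
            *size = pos;
        else
            /* Nothing read (e.g. the file was truncated below the saved
             * offset): fall back to the size reported by fstat(). */
            *size = st.st_size;

        close(fd);
    }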
[kzak@redhat.com: - fix coding style]
Signed-off-by: Dima Kogan <dkogan@cds.caltech.edu>
Signed-off-by: Karel Zak <kzak@redhat.com>