c-qcam - Connectix Color QuickCam video4linux kernel driver
Copyright (C) 1999 Dave Forrest <drf5n@virginia.edu>
                     released under GNU GPL.
1999-12-08 Dave Forrest, written with kernel version 2.2.12 in mind
CONFIG_PNP_PARPORT M for autoprobe.o IEEE1284 readback module
CONFIG_PRINTER_READBACK M for parport_probe.o IEEE1284 readback module
CONFIG_VIDEO_DEV M for videodev.o video4linux module
CONFIG_VIDEO_CQCAM M for c-qcam.o Color Quickcam module
With these flags, the kernel should compile and install the modules.
To record and monitor the compilation, I use:
(make zlilo ; \
make modules; \
make modules_install ;
depmod -a ) &>log &
less log # then a capital 'F' to watch the progress

But that is my personal preference.
2.2 Configuration

The configuration requires module configuration and device
configuration.  I like the kmod or kerneld process with the
/etc/modprobe.conf file, so the modules can automatically load and
unload as needed.  The following sections detail these procedures.
2.1 Module Configuration
Using modules requires a bit of work to install and pass the
parameters.  Entries in /etc/modprobe.conf tell kmod/modprobe how to do
this (a sample set is sketched after the autoprobe output below).  With
parallel port support (CONFIG_PRINTER) and the IEEE 1284 readback
system (CONFIG_PRINTER_READBACK) loaded, you should be able to read
some identification from your quickcam with
     modprobe -v parport
     modprobe -v parport_probe
     cat /proc/parport/PORTNUMBER/autoprobe
Returns:
CLASS:MEDIA;
MODEL:Color QuickCam 2.0;
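For reference, here is a minimal sketch of /etc/modprobe.conf entries for
this setup (the parport_pc io= value below is an assumption for a standard
LPT1; adjust it, and the aliases, to your hardware and kernel):

     alias parport_lowlevel parport_pc
     options parport_pc io=0x378 irq=none
     alias char-major-81 videodev
     alias char-major-81-0 c-qcam

With entries like these, a request for the video4linux char major (81)
causes kmod/modprobe to pull in videodev and c-qcam automatically.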
If you see output like this, all is well.  A common problem is that the
current driver does not reliably detect a c-qcam, even though one is
attached.  In this case, load the module explicitly with
     modprobe -v c-qcam
or
     insmod -v c-qcam
3.1 Checklist:
Can you get an image?
  v4lgrab >qcam.ppm ; wc qcam.ppm ; xv qcam.ppm
Is a working c-qcam connected to the port?
  grep ^ /proc/parport/?/autoprobe
Do the /dev/video* files exist?
  ls -lad /dev/video*
Is the c-qcam module loaded?
  modprobe -v c-qcam ; lsmod
Does the camera work with alternate programs? cqcam, etc?

If the camera isn't detected reliably, you might try patching the c-qcam
module to add a parport=xxx option, as in the bw-qcam module, so you can
specify the parallel port:
     insmod -v c-qcam parport=0
and bypass the detection code; see ../../drivers/char/c-qcam.c and
look for the 'qc_detect' code and its call.
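A minimal sketch of the idea, assuming the 2.2-era MODULE_PARM interface
(the helper name below is illustrative, not the driver's actual symbol):

     static int parport = -1;      /* -1 means "probe every port" */
     MODULE_PARM(parport, "i");

     static int init_cqcam_on(struct parport *port)
     {
             /* Skip ports the user did not ask for. */
             if (parport != -1 && port->number != parport)
                     return -ENODEV;
             /* ... existing qc_detect()/initialisation calls here ... */
             return 0;
     }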
This work is documented at the video4linux2 site listed below.
9.0 --- A sample program using v4lgrabber,
This program is a simple image grabber that will copy a frame from the
first video device, /dev/video0, to standard output in portable pixmap
format (.ppm).  Using this like 'v4lgrab | convert - c-qcam.jpg'
produced this picture of me at
http://mug.sys.virginia.edu/~drf5n/extras/c-qcam.jpg
-------------------- 8< ---------------- 8< -----------------------------
* Use as:
* v4lgrab >image.ppm
*
 * Copyright (C) 1998-05-03, Phil Blundell <philb@gnu.org>
 * Copied from http://www.tazenda.demon.co.uk/phil/vgrabber.c
* with minor modifications (Dave Forrest, drf5n@virginia.edu).
*
*/
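/* Read one pixel from *buf in the given capture format and depth, scale it
 * to 16-bit-per-channel r, g and b values, and advance buf past the pixel. */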
#define READ_VIDEO_PIXEL(buf, format, depth, r, g, b) \
{ \
  switch (format) \
  { \
  case VIDEO_PALETTE_GREY: \
    switch (depth) \
    { \
    case 4: \
    case 6: \
    case 8: \
      (r) = (g) = (b) = (*buf++ << 8); \
      break; \
 \
    case 16: \
      (r) = (g) = (b) = *((unsigned short *) buf); \
      buf += 2; \
      break; \
    } \
    break; \
 \
  case VIDEO_PALETTE_RGB565: \
    { \
      unsigned short tmp = *(unsigned short *)buf; \
      (r) = tmp&0xF800; \
      (g) = (tmp<<5)&0xFC00; \
      (b) = (tmp<<11)&0xF800; \
      buf += 2; \
    } \
    break; \
 \
  case VIDEO_PALETTE_RGB555: \
    (r) = (buf[0]&0xF8)<<8; \
    (g) = ((buf[0] << 5 | buf[1] >> 3)&0xF8)<<8; \
    (b) = ((buf[1] << 2 ) & 0xF8)<<8; \
    buf += 2; \
    break; \
 \
  case VIDEO_PALETTE_RGB24: \
    (r) = buf[0] << 8; (g) = buf[1] << 8; \
    (b) = buf[2] << 8; \
    buf += 3; \
    break; \
 \
  default: \
    fprintf(stderr, \
            "Format %d not yet supported\n", \
            format); \
  } \
}
int get_brightness_adj(unsigned char *image, long size, int *brightness) {
long i, tot = 0;
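  /* (From main():) Negotiate a capture format: grey-scale devices fall back
   * through 8-, 6- and 4-bit depths; colour devices try RGB24, then RGB565,
   * then RGB555, giving up if none is accepted. */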
if(ioctl(fd, VIDIOCSPICT, &vpic) < 0) {
vpic.depth=6;
if(ioctl(fd, VIDIOCSPICT, &vpic) < 0) {
      vpic.depth=4;
      if(ioctl(fd, VIDIOCSPICT, &vpic) < 0) {
        fprintf(stderr, "Unable to find a supported capture format.\n");
        close(fd);
        exit(1);
      }
}
}
} else {
vpic.depth=24;
vpic.palette=VIDEO_PALETTE_RGB24;
if(ioctl(fd, VIDIOCSPICT, &vpic) < 0) {
vpic.palette=VIDEO_PALETTE_RGB565;
vpic.depth=16;
if(ioctl(fd, VIDIOCSPICT, &vpic)==-1) {
        vpic.palette=VIDEO_PALETTE_RGB555;
        vpic.depth=15;

        if(ioctl(fd, VIDIOCSPICT, &vpic)==-1) {
          fprintf(stderr, "Unable to find a supported capture format.\n");
          return -1;
        }
}
}
}
buffer = malloc(win.width * win.height * bpp);
if (!buffer) {
fprintf(stderr, "Out of memory.\n");
exit(1);
}
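  /* Grab frames repeatedly, nudging vpic.brightness until
   * get_brightness_adj() reports the exposure is acceptable. */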
do {
int newbright;
    read(fd, buffer, win.width * win.height * bpp);
    f = get_brightness_adj(buffer, win.width * win.height, &newbright);
    if (f) {
vpic.brightness += (newbright << 8);
if(ioctl(fd, VIDIOCSPICT, &vpic)==-1) {
        perror("VIDIOCSPICT");
        break;
}
}
} while (f);
fputc(g>>8, stdout);
fputc(b>>8, stdout);
}
close(fd);
return 0;
}
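To build and try the grabber (assuming the usual V4L development headers
are installed), something like this should work:

     cc -o v4lgrab v4lgrab.c
     ./v4lgrab >qcam.ppm ; xv qcam.ppm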
* Philips saa7111 TV decoder
* Philips saa7185 TV encoder
Drivers to use: videodev, i2c-core, i2c-algo-bit,
                videocodec, saa7111, saa7185, zr36060, zr36067
Inputs/outputs: Composite and S-video
Norms: PAL, SECAM (720x576 @ 25 fps), NTSC (720x480 @ 29.97 fps)
Card number: 7
* Brooktree bt819 TV decoder
* Brooktree bt856 TV encoder
Drivers to use: videodev, i2c-core, i2c-algo-bit,
                videocodec, bt819, bt856, zr36060, zr36067
Inputs/outputs: Composite and S-video
Norms: PAL (720x576 @ 25 fps), NTSC (720x480 @ 29.97 fps)
Card number: 5
* Philips saa7114 TV decoder
* Analog Devices adv7170 TV encoder
Drivers to use: videodev, i2c-core, i2c-algo-bit,
                videocodec, saa7114, adv7170, zr36060, zr36067
Inputs/outputs: Composite and S-video
Norms: PAL (720x576 @ 25 fps), NTSC (720x480 @ 29.97 fps)
Card number: 6
* Philips saa7110a TV decoder
* Analog Devices adv7176 TV encoder
Drivers to use: videodev, i2c-core, i2c-algo-bit,
                videocodec, saa7110, adv7175, zr36060, zr36067
Inputs/outputs: Composite, S-video and Internal
Norms: PAL, SECAM (768x576 @ 25 fps), NTSC (640x480 @ 29.97 fps)
Card number: 1
* Micronas vpx3220a TV decoder
* mse3000 TV encoder or Analog Devices adv7176 TV encoder *
Drivers to use: videodev, i2c-core, i2c-algo-bit,
                videocodec, vpx3220, mse3000/adv7175, zr36050, zr36016, zr36067
Inputs/outputs: Composite, S-video and Internal
Norms: PAL, SECAM (768x576 @ 25 fps), NTSC (640x480 @ 29.97 fps)
Card number: 0
* Micronas vpx3225d/vpx3220a/vpx3216b TV decoder
* Analog Devices adv7176 TV encoder
Drivers to use: videodev, i2c-core, i2c-algo-bit,
                videocodec, vpx3220/vpx3224, adv7175, zr36050, zr36016, zr36067
Inputs/outputs: Composite, S-video and Internal
Norms: PAL, SECAM (768x576 @ 25 fps), NTSC (640x480 @ 29.97 fps)
Card number: 3
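If the zr36067 driver cannot autodetect your card, the "Card number" given
above is what you would pass via its card= module option (assuming your
driver version supports that parameter), for example:

     modprobe zr36067 card=7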
The best known TV standards are NTSC/PAL/SECAM, but for decoding a frame
that information is not enough.  There are several formats of the TV
standards, and not every TV decoder is able to handle every format; nor is
every combination supported by the driver.  There are currently 11
different TV broadcast formats all over the world.
The CCIR defines parameters needed for broadcasting the signal.
The CCIR has defined different standards: A, B, D, E, F, G, H, I, K, K1,
L, M, N, ...
The CCIR says very little about the colorsystem used!
And talking about a colorsystem says little about how it is broadcast.
When you speak about NTSC, you usually mean the standard: CCIR - M using
the NTSC colorsystem which is used in the USA, Japan, Mexico, Canada
and a few others.
When you talk about PAL, you usually mean: CCIR - B/G using the PAL
colorsystem which is used in many countries.
When you talk about SECAM, you mean: CCIR - L using the SECAM colorsystem
which is used in France, and a few others.
The other version of SECAM, CCIR - D/K, is used in Bulgaria, China,
Slovakia, Hungary, Korea (Rep.), Poland, Romania and a few others.
The CCIR - H uses the PAL colorsystem (sometimes SECAM) and is used in
Egypt, Libya, Sri Lanka, Syrian Arab Rep.
The CCIR - I uses the PAL colorsystem, and is used in Great Britain,
Hong Kong and a few others.
We do not talk about how the audio is broadcast!
Some rather good sites about the TV standards are:
http://www.sony.jp/ServiceArea/Voltage_map/
http://info.electronicwerkstatt.de/bereiche/fernsehtechnik/frequenzen_und_normen/Fernsehnormen/
and http://www.cabl.com/restaurant/channel.html
Other weird things around: NTSC 4.43 is a modified NTSC, which is mainly
used in PAL VCRs that are able to play back NTSC.  PAL 60 seems to be the
same as NTSC 4.43.  The datasheets also talk about NTSC 44; it seems to be
the same as NTSC 4.43.
NTSC Comb seems to be a decoder mode where the decoder uses a comb filter
to split chroma and luma instead of a delay line.
But I did not definitely find out what NTSC Comb is.
Philips saa7111 TV decoder
was introduced in 1997, is used in the BUZ and
can handle: PAL B/G/H/I, PAL N, PAL M, NTSC M, NTSC N, NTSC 4.43 and SECAM
Philips saa7110a TV decoder
was introduced in 1995, is used in the Pinnacle/Miro DC10(new), DC10+ and
can handle: PAL B/G, NTSC M and SECAM
Philips saa7114 TV decoder
was introduced in 2000, is used in the LML33R10 and
can handle: PAL B/G/D/H/I/N, PAL N, PAL M, NTSC M, NTSC 4.43 and SECAM
Brooktree bt819 TV decoder
can handle: PAL B/G, NTSC M
Brooktree bt856 TV Encoder
was introduced in 1994, is used in the LML33
can generate: PAL B/D/G/H/I/N, PAL M, NTSC M, PAL-N (Argentina)
Analog Devices adv7170 TV Encoder
was introduced in 1991, is used in the DC10 old
can generate: PAL , NTSC , SECAM
The adv717x should be able to produce PAL N, but you find nothing PAL N
specific in the registers.  It seems that you have to reuse another
standard to generate PAL N; maybe it would work if you use the PAL M
settings.
==========================
VIA MVP3
Forget it. Pointless. Doesn't work.
Intel 430FX (Pentium 200)
LML33 perfect, Buz tolerable (3 or 4 frames dropped per movie)
Intel 440BX (early stepping)
LML33 tolerable. Buz starting to get annoying (6-10 frames/hour)
> -q 25 -b 128 : 24.655.992
> -q 25 -b 256 : 25.859.820
I woke up, and can't go to sleep again. I'll kill some time explaining why
this doesn't look strange to me.
Let's do some math using a width of 704 pixels. I'm not sure whether the Buz
actually use that number or not, but that's not too important right now.
704x288 pixels, one field, is 202752 pixels. Divided by 64 pixels per block;
3168 blocks per field. Each pixel consists of two bytes; 128 bytes per block;
1024 bits per block. 100% in the new driver means 1:2 compression; the maximum
output becomes 512 bits per block. Actually 510, but 512 is simpler to use
for calculations.
Let's say that we specify d1q50. We thus want 256 bits per block; times 3168
becomes 811008 bits; 101376 bytes per field. We're talking raw bits and bytes
here, so we don't need to do any fancy corrections for bits-per-pixel or such
things. 101376 bytes per field.
d1 video contains two fields per frame. Those sum up to 202752 bytes per
frame, and one of those frames goes into each buffer.
But wait a second! -b128 gives 128kB buffers! It's not possible to cram
202752 bytes of JPEG data into 128kB!
This is what the driver notices and automatically compensates for in your
examples. Let's do some math using this information:
128kB is 131072 bytes. In this buffer, we want to store two fields, which
leaves 65536 bytes for each field. Using 3168 blocks per field, we get
20.68686868... available bytes per block; 165 bits. We can't allow the
request for 256 bits per block when there's only 165 bits available! The -q50
option is silently overridden, and the -b128 option takes precedence, leaving
us with the equivalence of -q32.
This gives us a data rate of 165 bits per block, which, times 3168, sums up
to 65340 bytes per field, out of the allowed 65536. The current driver has
another level of rate limiting; it won't accept -q values that fill more than
6/8 of the specified buffers. (I'm not sure why. "Playing it safe" seems to be
a safe bet. Personally, I think I would have lowered requested-bits-per-block
by one, or something like that.) We can't use 165 bits per block, but have to
lower it again, to 6/8 of the available buffer space: we end up with 124 bits
per block, the equivalence of -q24. With 128kB buffers, you can't use greater
than -q24 at -d1. (And PAL, and 704 pixels width...)
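The arithmetic above can be condensed into a few lines of C. This is only a
sketch of the reasoning, using the -d1/-q50/-b128 PAL numbers from the
example, not the driver's actual code:

     #include <stdio.h>

     int main(void)
     {
         double width = 704, height = 288;   /* one PAL field at -d1 */
         double fields_per_buffer = 2;       /* both fields share one buffer */
         double buffer_bytes = 128 * 1024;   /* -b128 */
         double requested_q = 50;            /* -q50 */

         double blocks = width * height / 64;             /* 8x8 blocks: 3168 */
         double requested_bits = requested_q * 512 / 100; /* -q100 == 512 bits */

         /* Bits available per block when the buffer holds both fields. */
         double avail_bits = buffer_bytes * 8 / fields_per_buffer / blocks;

         /* The driver also refuses to fill more than 6/8 of the buffer. */
         double allowed_bits = avail_bits * 6 / 8;

         double bits = requested_bits < allowed_bits ? requested_bits
                                                      : allowed_bits;
         printf("%.0f blocks/field, %.1f bits/block available, "
                "%.1f allowed -> about -q%.0f\n",
                blocks, avail_bits, allowed_bits, bits * 100 / 512);
         return 0;
     }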
The third example is limited to -q24 through the same process. The second
example, using very similar calculations, is limited to -q48. The only
example that actually grabs at the specified -q value is the last one, which
is clearly visible, looking at the file size.