Adding Realtime Video from a USB Camera to a Texture

This is a nice one, even if it currently only works on Windows (working on it!).

It is a small addition to Example 8 - Environmental Bump Mapping with GLSL Cube Mapping.

The program

The Textures used in this demo are here:

  • Data20.zip file (unpack the “data20” directory into the directory where you keep the “video_reflection.py” file)

The code uses the marvelous VideoCapture.py library by Markus Gritsch - for Windows.

If you know of a simple Python interface to Video4Linux, I would gladly integrate it. As I have not found one yet, I am starting with ctypes programming.

The museum environment texture is taken from http://local.wasp.uwa.edu.au/~pbourke/miscellaneous/stereographics/stereopanoramic/, courtesy of Peter Murphy.

And here is the resulting video - featuring a nice-looking hamster:

Program description

A simple demo program to check that the library works would be:

from VideoCapture import Device
cam = Device()
cam.saveSnapshot('image.jpg')

Three lines of code to get a camera shot and save it to disk - isn't that cool? There are a lot of options and functions to handle all the special cases that could arise, but a useful program is surprisingly easy to set up:

Camera initialization

from VideoCapture import Device
try:
    cam = Device()
except Exception:
    print "No camera device found"
    cam = None
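
If the initialization succeeded, it is worth grabbing one test image and checking its size, because that size is reused below when the frame is copied into the cube map face. A minimal sketch - the snapshot returned by getImage() is an ordinary PIL image, so size and mode are standard PIL attributes, and the printed values depend on your camera:

if cam:
    snap = cam.getImage()                       # returns a PIL image
    if snap:
        print "camera resolution:", snap.size   # e.g. (320, 240), camera dependent
        print "pixel mode:", snap.mode          # typically 'RGB'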

Frame Grabbing

For every OpenGL frame we render, we check for a new video frame from the USB camera. If we get an image (some 30-60 times per second, not in sync with the OpenGL rendering), we apply it to the cube map texture with glTexSubImage2D(), so we transfer only the relevant part of the texture to the graphics card:

if cam:                       # camera object was successfully initialized
    snap = cam.getImage()
    if snap:                  # a new image is ready from the camera
        snapstr = snap.tostring("raw", "RGBX")  # convert the PIL image to a raw RGBX byte string
        glTexSubImage2D( GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0,  # target face, mipmap level 0
                         0, 160,                        # x/y offset within the face
                         snap.size[0], snap.size[1],    # size of the image
                         GL_RGBA, GL_UNSIGNED_BYTE,
                         snapstr )

Very easy, once you've figured it out :-)

The rest of the program is heavily based on the previous example; look there for information on cube mapping.
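
For glTexSubImage2D() to have somewhere to copy the camera frame into, the negative-Z face of the cube map needs its full storage allocated beforehand - in this demo that happens in the cube map setup taken over from the previous example, where the museum panorama is loaded into all six faces. A minimal sketch of just that allocation step, with an illustrative face size of 512 (not necessarily the value the demo uses):

from OpenGL.GL import *

texture = glGenTextures(1)
glBindTexture(GL_TEXTURE_CUBE_MAP, texture)
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MIN_FILTER, GL_LINEAR)
glTexParameteri(GL_TEXTURE_CUBE_MAP, GL_TEXTURE_MAG_FILTER, GL_LINEAR)

# reserve storage for the face the camera frames are copied into;
# passing None allocates the texture without uploading pixel data
# (the real program uploads the museum images to all six faces instead)
glTexImage2D(GL_TEXTURE_CUBE_MAP_NEGATIVE_Z, 0, GL_RGBA,
             512, 512, 0, GL_RGBA, GL_UNSIGNED_BYTE, None)

# later, each new camera frame only replaces the region starting
# at offset (0, 160) via glTexSubImage2D() - see the code above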

The possibilities are endless - you could even analyze the camera image with PIL and let your OpenGL model react to it. What about a pair of eyes that not only reflect what they look at, but also follow your every movement?
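
As a small pointer in that direction, here is a minimal sketch of how the camera image could be analyzed with PIL before it is uploaded: shrink the snapshot, find its brightest spot, and turn that into a rough left/right and up/down direction that a model (such as the eyes mentioned above) could react to. Only plain PIL calls on the image returned by getImage() are used; the thumbnail size and the direction mapping are made up for the example:

def brightest_direction(snap):
    # shrink to a greyscale thumbnail so the search stays cheap
    small = snap.convert("L").resize((32, 24))
    pixels = list(small.getdata())
    index = pixels.index(max(pixels))        # index of the brightest pixel
    x, y = index % 32, index // 32
    # map the pixel position to a direction in the range -1 .. +1
    return (x / 16.0 - 1.0, y / 12.0 - 1.0)

if cam:
    snap = cam.getImage()
    if snap:
        dx, dy = brightest_direction(snap)
        # dx, dy could now drive the rotation of an eye model, for example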

