CSTA 2015 – Day 1: Media Computation in Python

Sunday 12 July – Computer Science Teachers Association Workshop – Media Computation in Python

Mark Guzdial and Barbara Ericson, Georgia Tech

This session provided insight into a different approach to learning programming in Python: using it to directly manipulate the content of image files and sound files.

Resources at http://coweb.cc.gatech.edu/mediaComp-Teach#Python

It uses JES (the Jython Environment for Students) – Python implemented in Java – to get the multimedia functions working across platforms.

Images

Images can be edited programmatically in JES. For example, to de-redden an image we would reduce the red colour value of each pixel.

We could define this function:

def decreaseRed(picture):
   for pixel in getPixels(picture):
      value = getRed(pixel)
      setRed(pixel, 0.5 * value)

Then once it is run,

>>> decreaseRed(picture)
>>> explore(picture)

will process and display the picture.

We can bring it back with

def increaseRed(picture):
   for pixel in getPixels(picture):
      value = getRed(pixel)
      setRed(pixel, 2 * value)

(except for rounding issues, which make halving and doubling not quite reciprocal actions)

If similar functions are defined for Green and Blue then you can wash the picture out.
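The wash-out idea can be sketched outside JES in plain Python, representing each pixel as an (r, g, b) tuple (the function name and the pixel representation here are illustrative, not JES API):

```python
def wash_out(pixels, factor=1.5):
    """Brighten every channel of every pixel, clamping at 255.

    pixels is a list of (r, g, b) tuples with values 0-255.
    """
    return [tuple(min(255, int(factor * c)) for c in px) for px in pixels]
```

Multiplying all three channels pushes colours towards white, which is why the picture looks washed out.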

Subtract each pixel's r, g and b values from 255 to get a negative.
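A minimal plain-Python sketch of the negative, again with pixels as (r, g, b) tuples (a hypothetical helper, not part of JES):

```python
def negative(pixels):
    """Invert each colour channel by subtracting it from 255."""
    return [(255 - r, 255 - g, 255 - b) for (r, g, b) in pixels]
```

Applying it twice gives back the original image, since 255 - (255 - c) = c.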

def redscale(picture):
   for px in getPixels(picture):
      r = getRed(px)
      g = getGreen(px)
      b = getBlue(px)
      nucolor = makeColor(r, r, r)
      setColor(px, nucolor)

Chromakey can be coded in a similar way:

  • Open two files: the blue-screen image and the background
  • Define “blue” as anywhere the blue value is greater than the red + green values
  • Step through the pixels in the foreground image
  • If a pixel is blue, get its x and y values, find the colour of the corresponding pixel in the background image, and replace the foreground pixel’s colour with it
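The steps above can be sketched in plain Python, with images as 2D lists of (r, g, b) tuples of the same dimensions (the function name and representation are assumptions for illustration, not the JES API):

```python
def chromakey(foreground, background):
    """Replace 'blue' foreground pixels (b > r + g) with the
    corresponding background pixel at the same x, y position."""
    result = []
    for y, row in enumerate(foreground):
        new_row = []
        for x, (r, g, b) in enumerate(row):
            if b > r + g:
                new_row.append(background[y][x])  # swap in background pixel
            else:
                new_row.append((r, g, b))         # keep foreground pixel
        result.append(new_row)
    return result
```

The b > r + g test is a deliberately simple definition of “blue”; real chromakey software uses more robust colour-distance tests.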

Sound

see mediacomputation.org for resources

We can walk through each rectangle (sample) of the encoded sound waveform – it is stored as a sequence of digitised values, of course.

t = makeSound(pickAFile())

will load a sound.

To make it louder we can just double each sample level:

def increaseVolume(sound):
   for sample in getSamples(sound):
      value = getSampleValue(sample)
      setSampleValue(sample, 2 * value)

If we exceed the maximum numeric value (roughly ±32,000 – the exact 16-bit limit is ±32,767), we clip the sound.

(i.e. if we set every sample to the maximum we’d hear high-pressure nothing)

def maximiseVolume(sound):
   for sample in getSamples(sound):
      value = getSampleValue(sample)
      if value >= 0:
         setSampleValue(sample, 32767)
      else:
         setSampleValue(sample, -32767)

Incredibly, we can still recognise human voices after this processing, because the frequency content is unchanged – humans recognise a voice even when its amplitude is distorted.

(thus we can store human voice in a really coarse way – using just two values per sample, i.e. one bit per sample)
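The one-bit idea is exactly what maximiseVolume does: only the sign of each sample survives. A plain-Python sketch, with the sound as a list of integer samples (an illustrative helper, not a JES function):

```python
def one_bit(samples):
    """Quantise each sample to a single bit of information:
    its sign, mapped back to the extreme 16-bit amplitudes."""
    return [32767 if s >= 0 else -32767 for s in samples]
```

Every sample now carries one bit, yet the zero crossings – and so the frequencies – of the original waveform are preserved.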

The Python book is available from the mediacomputation.org site.

Teacher ebook study

http://ebooks.cc.gatech.edu/TeachCSP-Python/

http://tinyurl.com/Teacher-ebook 

The authors are interested in hearing from teachers who want to participate in the study. This is a well-supported way for teachers, as much as students, to become competent in Python. The eBook uses a progression from modifying code through to writing it from scratch, and pays significant attention to student misconceptions.
