Using "Internet-of-Things" cameras to create timelapse videos

Using "Internet-of-Things" cameras to create timelapse videos

The following is a technical post about using the Raspberry Pi and a Raspberry Pi camera to create automated timelapse videos. Since this project was part of a larger project to develop an "internet of things" science kit, the goal was to have the timelapse video triggered by an MQTT message.

OK, so I don't currently have a really good reason to use MQTT to start a timelapse. I could have just ssh'd to the Pi and started a timelapse script that way. But I didn't, mainly because I wanted to move toward IoT cameras as part of my IoT-in-a-Box project. Because, who knows? Maybe one day I'll really want to trigger a timelapse with humidity data. Or trigger a photo with a capacitive touch sensor.

Here is a timelapse I made of green onions growing:

Green Onions

Notes on Camera Setup

I had to adjust the focus of the Raspberry Pi camera. The focal point was set to infinity, and I wanted it to be about a foot away, where my green onions would be. This involved screwing in a tiny little lens mount made of what is apparently one of the most malleable plastics on Earth. I turned it using an eyeglass screwdriver, but it turns out they make a tool for it, which I fully recommend without ever having tried.

I mounted the camera on an unstable little tripod, using the Pi camera mount:

Timelapse setup with Raspberry Pi, camera, and green onions.

MQTT camera control script

This script connects to an MQTT broker (I have a mosquitto server also running on the Pi) and subscribes to the topic "rpi-camera/command". When it gets a command message, it either takes a single photo or takes a series of photos spaced at a set time interval.
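
For reference, these are the two kinds of command payload the script understands (JSON, using the field names the script reads):

{"command": "still", "output": "mediocre-image.jpg"}

{"command": "timelapse", "output": "awesome-timelapse", "number": 5, "seconds_delay": 5}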

To run it, first install the paho-mqtt library:

 pip3 install paho-mqtt

Then create and run the following script as "mqtt-camera-controller.py" on the Raspberry Pi:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import paho.mqtt.client as mqtt
import json
import os
from picamera import PiCamera
from time import sleep

#set up camera
camera = PiCamera()

def take_still(output):
        camera.resolution = (1024, 768)
        camera.capture(output)
        print("Saved a still photo to " + output)

#set up MQTT client
clientName = "RPi-camera-client"
serverAddress = "10.0.0.150"

mqttClient = mqtt.Client(clientName)

def on_connect(client, userdata, flags, rc):
        mqttClient.subscribe("rpi-camera/command")
        print("Connected!")

def on_message(client, userdata, msg):
        payload = json.loads(msg.payload.decode('utf-8'))

        if payload['command'] == "still":
                output = payload['output']
                take_still(output)

        elif payload['command'] == 'timelapse':
                output = payload['output']
                number = payload['number']
                seconds_delay = payload['seconds_delay']

                #make folder 'output' to hold images
                print("Timelapse images stored in folder " + output)
                if not os.path.exists(output):
                        os.makedirs(output)

                for x in range(0, number):
                        print("Taking photo number %d" % x)
                        take_still("./{}/image_{}.jpg".format(output,x))
                        sleep(seconds_delay)
                print("Timelapse complete!")

        else:
                print("Unknown message!")

mqttClient.on_connect = on_connect
mqttClient.on_message = on_message

#start MQTT client
mqttClient.connect(serverAddress)
mqttClient.loop_forever()

A note on running the script

You can ssh to the Raspberry Pi and run the script directly, but it will stop as soon as you close your ssh connection. Instead, make the script executable and start it with nohup so that it keeps running after you log out:

chmod +x mqtt-camera-controller.py
sudo nohup ./mqtt-camera-controller.py & exit

This tells the process to ignore the hangup (SIGHUP) signal that is sent when the ssh connection closes; anything it would have printed to the terminal goes to nohup.out instead. It's useful to be able to check on the running script later with:

ps -ef | grep mqtt-camera-controller

This will show the running script along with its process ID (PID). Then you can stop it with:

kill -9 [PID]

(I was curious and looked it up: the -9 specifies sending SIGKILL, the "kill signal" that a process can't catch or ignore, rather than gentler signals like the "hangup" (SIGHUP) or the "interrupt" (SIGINT).)
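
Before writing a full Python publisher, you can also sanity-check the controller straight from the command line with the mosquitto client tools (if they aren't already installed: sudo apt-get install mosquitto-clients). This assumes the broker address used in the script above:

mosquitto_pub -h 10.0.0.150 -t "rpi-camera/command" -m '{"command": "still", "output": "test.jpg"}'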

Send a command

Now that the camera controller is running, you can send it commands. To start a timelapse, create the following script in a file take-timelapse.py, and edit the values for the MQTT server setup and timelapse parameters:

#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import paho.mqtt.client as mqtt
import json
from time import sleep

#set up MQTT client
clientName = "timelapse-script"
serverAddress = "10.0.0.150"
MQTT_TOPIC = "rpi-camera/command"
MQTT_MSG = json.dumps({"command": "timelapse", "output": "awesome-timelapse", "number": 5, "seconds_delay": 5})

def on_connect(client, userdata, flags, rc):
        print("Connected to MQTT Server!")
        client.subscribe(MQTT_TOPIC)
        client.publish(MQTT_TOPIC,MQTT_MSG)

def on_publish(client, userdata, res):
        print("Message published!")

mqttClient = mqtt.Client(clientName)
mqttClient.on_connect = on_connect
mqttClient.on_publish = on_publish

#start MQTT client
mqttClient.connect(serverAddress)
mqttClient.loop_start()
sleep(1)
mqttClient.loop_stop()
mqttClient.disconnect()

This will create a series of images in a folder called "awesome-timelapse". If you want to take a single photo instead, replace the message sent with:

{"command": "still", "output":"mediocre-image.jpg"}

Useful image processing resources

To do stuff with the images, first install ImageMagick:

sudo apt-get install imagemagick

To create a .gif from a set of images, run this command from within the directory holding all your images (the -delay value is in hundredths of a second per frame, and -loop 0 makes the .gif loop forever):

convert -delay 15 -loop 0 image*.jpg timelapse.gif

To add a set of images to the end of an existing .gif:

convert -delay 15 -loop 0 timelapse.gif image*.jpg timelapse2.gif

To rotate an image clockwise:

convert -rotate "90" in.jpg out.jpg

To resize a batch of images, first 'mkdir output_folder', then:

mogrify -path output_folder -resize [NEW_WIDTH_IN_PIXELS] *.jpg

Script for batch processing (here for rotating/renaming):

for f in *.jpg; do
    convert -rotate "270" "$f" "${f%.jpg}_rotated.jpg"
done

Done

I'm pretty pleased that I can now trigger camera actions with MQTT messages. It appeals to me that nearly anything in my house could be wired up to trigger a camera. Maybe once I get my ESP-8266 button finished, I will create a little camera remote control.

Right now I only really plan on using this to create more timelapses of things growing, but I think there could be other, more technically interesting projects to make out of it. I can imagine a general sort of use where you (intermittently) care where some objects are (or whether they're within the view of the camera), or want some information about some object that you could get by looking at it. For instance, if I'm at the grocery store and want to know the contents of my refrigerator (or how much milk I have left), I might want to trigger a photo, do some processing, and send back a response. If I were using this to augment a workspace, then when I wanted to switch tasks or clean up for the day, I could trigger an image of my desktop, and record the current set and layout of the objects so that I can more easily remember and return to what I was doing.

Except... things I don't understand yet

My green onions timelapse above was created with photos taken half an hour apart. After running the script for the first time, I noticed that it had stopped after collecting image_94.jpg. In fact, every time I ran it, it would stop at image_94.jpg and I would have to restart it (which is why the .gif is choppy).

Initially I thought this must be some sort of storage space limitation, like on how many files or how much data can be stored in a single folder. So I would move all the files out of that folder and restart the script, and it would collect 94 more images, and then I'd do it over again. Sometimes I removed files before it hit image_94, but it didn't help.

But I realized that image_94.jpg is really the 95th image (because the count starts at image_0.jpg), which means that the program is failing to take its 96th photo; 96 photos at half-hour intervals works out to roughly 48 hours. That seems a little too coincidental not to be some sort of time-related failure. But I can't find any reason that the process would just die right around 48 hours: the process is gone from ps, there's nothing in nohup.out, and I have to reboot the Pi before I can run the nohup command again.

So I'm leaving the process running now without actually telling it to start taking photos, and will see if it fails at 48 hours! Will update.