In the last post on my autonomous racing helicopter, I managed to get the OV9650 camera communicating, in a most basic way, over the I2C bus. To actually capture images from it, the next step is to get a Linux kernel driver going that can connect the sensor to the DM3730's ISP (Image Signal Processor) hardware and route the resulting data into user space.
After some preliminary investigation, I found the first major problem. In the Linux tree as of 3.6, there is a convenient abstraction in the video4linux2 (V4L2) universe for systems-on-chip: the video device as a whole represents the entire video capture chain, and each new "sensor" only needs a subdevice driver written for it. Quite a few sensor drivers already exist in the mainline kernel tree, including one for the closely related OV9640 camera sensor. The downside, however, is that there are two competing frameworks a sensor driver can be written against: the "soc-camera" framework and the raw video4linux2 subdevice API. Sensors written for these two frameworks are incompatible, and from what I've seen, each platform supports only one of them. Of course, the omap3 platform used by the DM3730 only supports the raw video4linux2 API, whereas all the similar sensors are written for the "soc-camera" framework!
Laurent Pinchart, a V4L2 developer, has been working on unifying the two, but I had a hard time locating a canonical description of the current state of affairs. Possibly the closest thing to a summary of the soc-camera/V4L2 subdev situation can be found in this mailing list post:
Subject: Re: hacking MT9P031 for i.mx
From: Laurent Pinchart <laurent.pinchart@xxxxxxxxxxxxxxxx>
Date: Fri, 12 Oct 2012 15:11:09 +0200

...

soc-camera already uses v4l2_subdev, but requires soc-camera specific
support in the sensor drivers. I've started working on a fix for that
some time ago, some cleanup patches have reached mainline but I haven't
been able to complete the work yet due to lack of time.

--
Regards,

Laurent Pinchart
Since I don't have a lot of need to upstream this work, I took the easy route and started with an existing v4l2 subdevice sensor driver, specifically the mt9v034 driver from the ISEE git repository. I copied it and replaced its guts with those of the ov9640 "soc-camera" driver from the mainline kernel. After a number of iterations I had a driver that compiled and appeared to operate the chip.
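To give a sense of the shape of a "raw" v4l2 subdevice sensor driver like the mt9v034 one I started from, here is a heavily stripped-down sketch. This is not my actual driver; the ov9650_* names, the empty s_stream body, and the I2C address handling are illustrative placeholders, and a real driver also needs pad format ops and controls.

```c
/*
 * Sketch of a minimal raw-V4L2 subdevice sensor driver (3.x-era API).
 * Placeholder names throughout -- not the real ov9650 driver.
 */
#include <linux/i2c.h>
#include <linux/module.h>
#include <linux/slab.h>
#include <media/v4l2-subdev.h>

struct ov9650 {
	struct v4l2_subdev sd;
	struct media_pad pad;
};

static int ov9650_s_stream(struct v4l2_subdev *sd, int enable)
{
	/* Start/stop the sensor's pixel output here, typically by
	 * writing a control register over I2C. */
	return 0;
}

static const struct v4l2_subdev_video_ops ov9650_video_ops = {
	.s_stream = ov9650_s_stream,
};

static const struct v4l2_subdev_ops ov9650_ops = {
	.video = &ov9650_video_ops,
};

static int ov9650_probe(struct i2c_client *client,
			const struct i2c_device_id *id)
{
	struct ov9650 *sensor;

	sensor = kzalloc(sizeof(*sensor), GFP_KERNEL);
	if (!sensor)
		return -ENOMEM;

	/* Register as a plain v4l2_subdev -- no soc-camera involved. */
	v4l2_i2c_subdev_init(&sensor->sd, client, &ov9650_ops);
	sensor->sd.flags |= V4L2_SUBDEV_FL_HAS_DEVNODE;

	/* A single source pad, which the board code links to the ISP. */
	sensor->pad.flags = MEDIA_PAD_FL_SOURCE;
	return media_entity_init(&sensor->sd.entity, 1, &sensor->pad, 0);
}

static int ov9650_remove(struct i2c_client *client)
{
	struct v4l2_subdev *sd = i2c_get_clientdata(client);
	struct ov9650 *sensor = container_of(sd, struct ov9650, sd);

	v4l2_device_unregister_subdev(sd);
	media_entity_cleanup(&sd->entity);
	kfree(sensor);
	return 0;
}

static const struct i2c_device_id ov9650_id[] = {
	{ "ov9650", 0 },
	{ }
};
MODULE_DEVICE_TABLE(i2c, ov9650_id);

static struct i2c_driver ov9650_driver = {
	.driver		= { .name = "ov9650" },
	.probe		= ov9650_probe,
	.remove		= ov9650_remove,
	.id_table	= ov9650_id,
};
module_i2c_driver(ov9650_driver);

MODULE_LICENSE("GPL");
```

The key contrast with a soc-camera driver is that nothing here references soc_camera_device or the soc-camera host; the subdev is self-contained and the platform's ISP driver discovers it through the board's platform data.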
To test, I have been using Laurent Pinchart's media-ctl and yavta tools. The media controller framework provides a general way to configure the image pipelines on systems-on-chip. With it, you can configure whether the sensor output flows through the preview engine or the scaler, and in what order, if at all. yavta is just a simple command line tool to set and query V4L2 controls and do simple frame capture.
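A typical session looks something like the following sketch. The entity names, I2C address, format, and /dev/video node here are assumptions for illustration; the real names come from running media-ctl -p on your own board, and yours will differ.

```shell
# Reset all links, then route the sensor straight through the CCDC
# to its memory output (entity names are board-specific examples).
media-ctl -r
media-ctl -l '"ov9650 2-0030":0 -> "OMAP3 ISP CCDC":0 [1]'
media-ctl -l '"OMAP3 ISP CCDC":1 -> "OMAP3 ISP CCDC output":0 [1]'

# Set a matching format on each pad along the pipeline.
media-ctl -V '"ov9650 2-0030":0 [YUYV 640x480]'
media-ctl -V '"OMAP3 ISP CCDC":1 [YUYV 640x480]'

# Capture 4 frames from the CCDC output video node and save them.
yavta -f YUYV -s 640x480 -n 4 --capture=4 -F /dev/video2
```

The nice thing about this split is that the pipeline wiring lives entirely in user space; the same sensor subdev can be routed through the preview engine instead just by changing the media-ctl links.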
One of the first images with correct coloring is below. Single-frame capture out of the box was recognizable, but the quality was pretty poor. Also, as more frames were captured, the images became more and more washed out, with all the pixel values approaching the same bright gray color. I will tackle that problem in a later post.
