Monday, April 27. 2009, 12:37 PM
The following patches have been pushed to the v4l-dvb hg tree:
01/08: ARM: update pcm990-baseboard.c to match mainline
02/08: soc-camera: add a free_bus method to struct soc_camera_link
03/08: soc-camera: host-driver cleanup
04/08: soc-camera: remove an extra device generation from struct soc_camera_host
05/08: soc-camera: simplify register access routines in multiple sensor drivers
06/08: soc-camera: link host drivers after clients
07/08: mx3_camera: Fix compilation with CONFIG_PM
08/08: pxa_camera: Documentation of the FSM
Thursday, April 23. 2009, 06:16 PM
After a long fight to find a combination of parameters that would satisfy the Microsoft PTP driver, using logs recorded from real Canon digital cameras, it turned out that the line "T is a constant character" in the DateTime string definition of the PIMA 15740:2000 standard should actually be interpreted as "the literal 'T' character." Adhering to this at last made Windows happy. This achievement has been proudly marked with the tag v1.0. On the way to this milestone ptp-gadget has also acquired a DCF filesystem: all images are now reported under /store_00010001/DCIM/100LINUX/. This is, however, not a complete hierarchical filesystem implementation: subdirectories are still not supported, and the presented directory structure is completely static.
Monday, April 6. 2009, 07:31 PM
Completed step 1 of the soc-camera conversion to the v4l2 framework. This means that all drivers and platforms have been converted to a scheme where camera instances are registered by platforms as platform devices, and soc-camera registers the I2C devices later (see also my earlier posts). I also removed struct device from struct soc_camera_host, making camera instances direct children of host platform devices. In the meantime the soc-camera framework covers four architectures (PXA270, SH7722, i.MX31, and i.MX1), is used by six in-tree platforms (pcm037, pcm990, em-x270, mioa701, ap325rxa, migor), and supports six client devices (mt9m001, mt9m111, mt9t031, mt9v022, ov772x, tw9910).
Wednesday, April 1. 2009, 12:07 AM
The following patches have been pushed:
01/09: pxa_camera: Enforce YUV422P frame sizes to be 16 multiples
02/09: pxa_camera: Remove YUV planar formats hole
03/09: pxa_camera: Redesign DMA handling
04/09: pxa_camera: Fix overrun condition on last buffer
05/09: pxa-camera: simplify the .buf_queue path by merging two loops
06/09: ov772x: wrong pointer for soc_camera_link is modified
07/09: soc-camera: fix breakage caused by 1fa5ae857bb14f6046205171d98506d8112dd74e
08/09: mt9m001: fix advertised pixel clock polarity
09/09: ov772x: add edge contrl support
Friday, March 27. 2009, 06:07 PM
The project has been announced on the linux-usb mailing list in this post. As described in the headline article of this category, currently only downloading of images and thumbnails is supported. It has been positively tested with Linux hosts running various distributions and gphoto2 versions, as well as with a Mac OS X Tiger host. However, tests with Windows systems are still unsuccessful.
Saturday, March 21. 2009, 10:21 AM
The first step of the soc-camera framework conversion to the v4l2-device API is the transformation of the soc-camera core driver into a platform driver. This way the bond between soc-camera and the individual (sensor, decoder, ...) I2C device drivers becomes weaker, and the double probing of those I2C drivers is eliminated.
The reason for the double probing was that many camera sensors have their I2C interfaces disabled in the absence of the master clock. Therefore the actual I2C probing routine could not access the chip; it only registered the device with the soc-camera framework and reported success unconditionally. Then, when the soc-camera framework found a match between a camera device and a host, it registered the respective devices and initiated the second-stage probing, which switched on the master clock of the camera interface and made actual I2C probing possible.
With the new platform-device approach, the soc-camera core probe method is called first and switches the master clock on; when the camera device driver becomes available, its I2C probe method is called and can immediately access the device's I2C registers to perform the actual device identification and initialisation.
The first part of this conversion was to move one specific configuration to this scheme. The chosen configuration is the i.MX31 host with an MT9T031 camera; both have been successfully converted, and an RFC patch has been posted to the V4L mailing list.
Saturday, March 14. 2009, 09:11 PM
The following patches have been pushed:
01/12: soc-camera: separate S_FMT and S_CROP operations
02/12: soc-camera: configure drivers with a default format on open
03/12: sh-mobile-ceu-camera: set field to the value, configured at open()
04/12: soc-camera: configure drivers with a default format at probe time
05/12: ov772x: use soft sleep mode in stop_capture
06/12: video: use videobuf_waiton() in sh_mobile_ceu free_buffer()
07/12: soc-camera: add board hook to specify the buswidth for camera sensors
08/12: pcm990 baseboard: add camera bus width switch setting
09/12: mt9m001: allow setting of bus width from board code
10/12: mt9v022: allow setting of bus width from board code
11/12: soc-camera: remove now unused gpio member of struct soc_camera_link
12/12: mt9t031 bugfix
Saturday, March 14. 2009, 08:44 PM
Until now we only used JamVM in single-threaded mode because, firstly, saving and restoring of the Java context had not been supported in the kernel, and secondly, the kernel oopsed reproducibly, but at varying places, when multiple jamvm threads were running. Finally I found time to study this problem and have "recognized its nature." It turned out to be the same old problem with "rotated" registers. JEM implements the Java operand stack in the first 8 general-purpose registers and pushes and pops them using some internal rotation. When such a rotation is in effect, accesses to single and to multiple registers actually hit different registers. For example, a "load multiple" (LDM) from registers r0-r7 returns values v0-v7, but reading r0 using MOV or another single-register instruction can actually deliver v1, or any other value, depending on the number of elements on the Java stack. This rotation is always in effect as long as the R bit in the Status Register is set. It has been known since the early days of JEM development that this bit does not get cleared automatically when leaving JEM mode: a thread that has once entered JEM has the R bit set for its entire lifetime. Only when switching into supervisor (kernel) mode or to a different thread does the Status Register get replaced and the R bit cleared. This is the reason we have to wind and unwind the Java operand stack on every entry to and exit from JEM mode. It has now turned out that other threads in the JamVM process get the R bit set too! And not only once, at their creation time: clearing the R bit at fork time is not enough - it gets set again and again. Unfortunately, it is still absolutely unclear how and where it gets set. A workaround has been implemented to clear the R bit on every context switch if the thread we are switching to is not going to run in JEM mode. This eliminated the oopses for now and allows us to continue working on multithreading support in JEM.
Monday, March 2. 2009, 10:34 AM
On a simple "Hello, World" test we are still about 1 second behind the pure-software JamVM version, which makes performance optimization one of our primary goals. We started by counting all occurring traps and measuring the time spent in them. This provided us with some interesting and valuable information, which we presented at FOSDEM this year. Next we want to find out exactly which code fragments take the most time, so we used oprofile. For this we had to upgrade to the current version 2.3.0 of the AVR32 buildroot. Now we have some preliminary results, which seem to point at UTF-8 string hashing and at memory allocation in the kernel.
Sunday, March 1. 2009, 07:13 PM
I originally developed the soc-camera V4L2 API in 2007-2008 as part of a customer project; in 2008 I became the maintainer of the API in the Linux kernel. This API supports video interfaces on various Systems-on-Chip (SoCs) and multiple video sources such as CMOS sensors and TV decoders. It has been in the mainline Linux kernel since version 2.6.25 (released in April 2008) and currently supports the PXA270, SH7722, and i.MX31 SoCs and various video source devices from Aptina (formerly Micron), OmniVision, and Techwell; out-of-tree patches exist for the i.MX27 and PXA310 SoCs. The typical use of this API is on embedded video-enabled devices running Linux.
After the completion of the original customer project, development and maintenance of the API have continued on an unpaid, voluntary basis. As the API evolves and support for more hardware platforms and software standards is added, the work required grows, and it is becoming increasingly difficult to continue it in my free time.
One of the bigger pending tasks in this area is integrating the soc-camera API with the new v4l2-device framework. This integration would allow seamless reuse of drivers for video data source components between embedded systems and drivers for USB and PCI video cards. For example, a USB camera using an MT9M111 sensor from Aptina would be able to use the same driver as an embedded PXA270 system with this sensor directly connected to the SoC.
Other planned tasks include support for the AVR32 (AP700x) and AT91 SoCs from Atmel, other PXA SoCs from Marvell, and more features in existing sensor drivers. Maintenance of the API, which includes reviewing and integrating patches submitted by individual authors and keeping in sync with the constantly evolving kernel and video APIs, also takes a considerable amount of time. For example, the soc-camera contribution to the 2.6.28 Linux kernel consisted of about 40 patches.
This increasing amount of work motivated me to look for sponsors. Sponsorship can be arranged on a per-task or per-time basis. A sponsor can volunteer to finance the whole soc-camera work based on my reports and bills describing the work performed and the hours spent. Sponsorship offers for specific parts of the work or with limited budgets would also be gratefully considered. Specific Linux kernel and user-space system development orders in video or other areas are also possible. Offers can be directed to web@liakhovetski.de.
Sunday, March 1. 2009, 06:58 PM
Welcome to the blog of Guennadi Liakhovetski. This site is going to host the development status of my various projects. I hope it will be useful for others.