1. V4L2 is developing at a fast pace to match new, faster, higher-resolution, more capable hardware. Mobile phones with multiple 1 GHz+ CPU cores, gigabytes of RAM, hardware-accelerated graphics capable of driving HD displays, and multiple hardware multimedia engines for video processing, compression and decompression, format conversion, capture, and streaming are just one product category. This development requires not only new drivers, but also new core APIs to be designed and implemented. Soc-camera was one of the first APIs to cleanly separate camera interface drivers from sensor drivers. Then the V4L2 subdevice API followed, which has also been adopted by soc-camera. Next, the Media Controller (MC) API was developed, which soc-camera still does not support.

With the MC API, the various hardware blocks of a video data processing pipeline, such as sensors, converters, scalers, and video DMA engines, are represented by "entity" objects. Data flows between these entities via connected "pads." Traditionally the whole pipeline has been represented in user-space by a single video device node. With MC entities, pads, and the links connecting them, more device nodes have been added in user-space, and users can now configure each hardware block in the pipeline independently.

To support this, a set of pad operations has been added to the subdevice API. Pad operations duplicate many classical subdevice operations, like configuring data formats, scaling, cropping, etc., but do so in a more precisely targeted manner. Since multiple user-space applications can open the same device node simultaneously, pad operations use the notion of a file handle to separate user contexts. Users get their own "playgrounds," where they can try various configurations without actually reconfiguring the hardware. Of course, once a user decides to push a configuration to the hardware, it becomes global.
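As an illustration of this context separation, here is a minimal, hypothetical sketch of a pad-level set_fmt operation in a sensor subdevice driver. The "my_sensor" names are invented for this example; the operation prototype and the V4L2_SUBDEV_FORMAT_TRY / V4L2_SUBDEV_FORMAT_ACTIVE selectors follow the subdevice pad API as it currently stands:

#include <media/v4l2-subdev.h>

/* Hypothetical sensor driver state; all "my_sensor" names are invented. */
struct my_sensor {
        struct v4l2_subdev sd;
        struct v4l2_mbus_framefmt fmt;  /* the active (hardware) format */
};

static int my_sensor_set_fmt(struct v4l2_subdev *sd,
                             struct v4l2_subdev_fh *fh,
                             struct v4l2_subdev_format *format)
{
        struct my_sensor *sensor = container_of(sd, struct my_sensor, sd);

        /* A real driver would first clamp format->format to its limits. */

        if (format->which == V4L2_SUBDEV_FORMAT_TRY) {
                /*
                 * TRY formats live in the file handle: each user gets a
                 * private playground, and nothing touches the hardware.
                 */
                *v4l2_subdev_get_try_format(fh, format->pad) = format->format;
                return 0;
        }

        /* ACTIVE: this configuration becomes global for all users. */
        sensor->fmt = format->format;
        /* ... program the hardware here ... */
        return 0;
}

static const struct v4l2_subdev_pad_ops my_sensor_pad_ops = {
        .set_fmt = my_sensor_set_fmt,
};

The TRY branch only touches the per-file-handle scratch format, which is what gives each user a private playground; only the ACTIVE branch ever reaches the hardware.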
Redundancy between the classical subdevice and the pad-level APIs makes it impractical to implement both in subdevice drivers. Currently, subdevice drivers of both kinds can be found in the kernel, which makes working with them difficult: camera interface (bridge) drivers supporting the classical subdevice API cannot use subdevice drivers implementing the pad-level API, and vice versa. To alleviate this problem, a wrapper has been proposed and posted to the linux-media mailing list. This is just a first shot and will certainly require a few more revisions before it can be pushed to the mainline.
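The sketch below illustrates the idea behind such a wrapper; it is not the code posted to the list, just a simplified example. It assumes the classical s_mbus_fmt video operation and the pad-level set_fmt operation, and lets a bridge driver set a subdevice format without knowing which of the two APIs the subdevice implements:

#include <media/v4l2-subdev.h>

/*
 * Illustration only: a simplified helper in the spirit of the proposed
 * wrapper. It tries the pad-level operation first and falls back to the
 * classical one if the subdevice does not implement it.
 */
static int subdev_set_format(struct v4l2_subdev *sd, unsigned int pad,
                             struct v4l2_mbus_framefmt *mf)
{
        struct v4l2_subdev_format format = {
                .which  = V4L2_SUBDEV_FORMAT_ACTIVE,
                .pad    = pad,
                .format = *mf,
        };
        int ret;

        /* Prefer the pad-level operation if the subdevice provides it. */
        ret = v4l2_subdev_call(sd, pad, set_fmt, NULL, &format);
        if (ret != -ENOIOCTLCMD) {
                *mf = format.format;
                return ret;
        }

        /* Fall back to the classical video operation. */
        return v4l2_subdev_call(sd, video, s_mbus_fmt, mf);
}

A bridge driver could then call subdev_set_format() unconditionally, and subdevices implementing either API would keep working.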
2. Since soc-camera does not (yet) support pad-level operations, video pipeline configuration from user-space is performed in the traditional way: via a single video device node. For example, the user can only request a global scaling, without being able to specify which unit in the pipeline should perform it. If both the camera sensor and the SoC camera interface can scale video, the decision of which one should do it can easily become non-trivial. The same applies to cropping. The sh_mobile_ceu_camera soc-camera driver for r-/sh-mobile ARM and SuperH SoCs implements a rather advanced algorithm to make such decisions and perform the required configuration. A large part of that algorithm, however, is hardware-neutral and can be turned into a generic scaling and cropping helper library. A first version of such a library has been posted to the mailing list. It is hoped that at least the emerging VIN driver will use it instead of duplicating this functionality.
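To give an idea of what the hardware-neutral part of such a decision could look like, here is a hypothetical sketch (every name here is invented for illustration, not taken from the posted library): splitting a requested total downscale factor between the sensor and the host, with the sensor contributing as much as it can.

#include <linux/kernel.h>
#include <linux/math64.h>

/*
 * Hypothetical helper in the spirit of a generic scaling library.
 * Scale factors are downscale ratios (input size / output size) in
 * 16.16 fixed point, so they are always >= SCALE_ONE.
 */
#define SCALE_ONE       (1u << 16)      /* 1.0 in 16.16 fixed point */

struct scale_split {
        unsigned int sensor;    /* factor the sensor should apply */
        unsigned int host;      /* remaining factor for the host */
};

static void split_scale(unsigned int total, unsigned int sensor_max,
                        struct scale_split *out)
{
        /* Let the sensor contribute as much of the scaling as it can. */
        out->sensor = min(total, sensor_max);

        /* The host makes up the difference: total == sensor * host. */
        out->host = (unsigned int)div_u64((u64)total << 16, out->sensor);
}

A camera host driver would then program the sensor with out->sensor and configure its own scaler with out->host; cropping can be split between the pipeline stages along similar lines.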