[Adam Taylor] always has interesting FPGA posts and his latest is no exception. He wanted to use a Zynq for image processing. Makes sense. You can do the high-speed parallel parts in the FPGA fabric and do higher-level processing on the built-in CPU. The problem is, of course, you need to get the video data into the system. [Adam] elected to use the Mobile Industry Processor Interface (MIPI) Camera Serial Interface Issue 2 (CSI-2).
This high-speed serial interface is optimized for data flowing in one direction. The camera, which acts as the master, sends data over one or more serial data lanes (at least one), all synchronized to a single clock lane. To increase throughput, data transfers on both the rising and falling clock edges. The receiving side also runs a pretty standard I2C master to send commands to the camera which, for the purposes of I2C, is the slave.
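The control path is worth seeing concretely. Many CSI-2 sensors take commands as writes to 16-bit register addresses over that I2C link. Here is a minimal sketch of that framing; the slave address, register number, and the stand-in bus object are all hypothetical, not taken from [Adam]'s post or any particular sensor's datasheet:

```python
def make_reg_write(reg_addr, value):
    """Build the bytes for a 16-bit register / 8-bit value write,
    the framing many CSI-2 sensors use on their I2C control port."""
    return bytes([(reg_addr >> 8) & 0xFF, reg_addr & 0xFF, value & 0xFF])

class FakeI2CBus:
    """Stand-in for a real I2C master (e.g. smbus2) so the sketch
    runs without hardware; it just logs each transaction."""
    def __init__(self):
        self.log = []

    def write(self, slave_addr, payload):
        self.log.append((slave_addr, payload))

CAM_ADDR = 0x36        # hypothetical 7-bit camera slave address
STREAM_CTRL = 0x0100   # hypothetical "start streaming" register

bus = FakeI2CBus()
bus.write(CAM_ADDR, make_reg_write(STREAM_CTRL, 0x01))  # start streaming
print(bus.log)  # [(0x36, b'\x01\x00\x01')]
```

On real hardware the `FakeI2CBus` would be replaced by whatever I2C master the PS or fabric exposes; the register sequence itself comes from the sensor's datasheet.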
In theory, one lane of data on the D-PHY can carry up to 4.5 Gbps over four wires: a differential pair for data plus another for the clock. You will likely get less with an FPGA. [Adam's] post quotes different numbers, but also mentions the FPGA won't get there anyway. One of the chips used supports the D-PHY directly on the chip, but the standard Zynq does not. Even when using IP, you have to understand this and make the appropriate choices, and that's the main point of the post.
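A quick back-of-envelope check shows why lane count and per-lane rate matter. The figures below are illustrative, not from [Adam]'s post: a common 1080p60 RAW10 stream, with a rough 20% margin assumed for blanking and protocol overhead, against a modest per-lane rate of the sort an FPGA's I/O might actually sustain:

```python
import math

def required_gbps(width, height, fps, bits_per_pixel, overhead=1.2):
    """Raw pixel bandwidth in Gbps, padded by a rough margin for
    blanking and CSI-2 protocol overhead (the 1.2 is an assumption)."""
    return width * height * fps * bits_per_pixel * overhead / 1e9

def lanes_needed(required, per_lane_gbps):
    """Smallest number of D-PHY data lanes that covers the requirement."""
    return math.ceil(required / per_lane_gbps)

need = required_gbps(1920, 1080, 60, 10)     # 1080p60, RAW10
print(round(need, 2))                        # ~1.49 Gbps
print(lanes_needed(need, 1.0))               # at 1 Gbps/lane: 2 lanes
print(lanes_needed(need, 2.5))               # at 2.5 Gbps/lane: 1 lane
```

The same stream that fits on a single lane at the spec's higher rates needs two lanes once the FPGA caps the per-lane speed, which is exactly the kind of trade-off the IP configuration forces you to understand.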
It does illustrate how even using IP isn’t as plug-and-play as, say, hooking up home stereo equipment. You still have to understand what’s going on and how to best work with the IP — in this case both on the outside interface and the inside one.