
SoC Architecture
Figure 1: SoC Architecture
Figure 1 shows the architecture of the TMS320DM8148 SoC, which integrates a high-performance video processor, a DSP, hardware codecs and an ARM Cortex-A8 RISC core. The main elements are as follows:
ARM Cortex-A8 core
TMS320C6748 Floating point DSP
High Definition Video Image Co-processing (HDVICP) Engine – Encode, Decode, Transcode operations
HD Video Processing Subsystem (HDVPSS) – Video capture modules
Imaging Subsystem (ISS) – Camera sensor connection and Image sensor interface
Media controller of HDVPSS, HDVICP, ISS
Program/Data Storage – DDR2/3, SATA, MMC/SD/SDIO, GPMC + ELM (Error Locator Modules) Memory interfaces
Serial Communication interfaces – SPI, I2C, UART, McASP (Multi Channel Audio Serial Ports), McBSP (Multi Channel Buffered Serial Port), DCAN
Connectivity – EMAC R/G/MII, MDIO, USB, PCIe
System Controls – RTC, PRCM (Power, Reset, Clock Management), GP Timer, JTAG, WDT, Spin lock, Mail box
This SoC supports several compression formats: H.264, MPEG-2, MPEG-4 SP/ASP and JPEG/MJPEG. This dissertation targets the H.264 format; the video features supported are as follows:
Video Capture
The video capture is done by taking input from the multi-channel video ports (4x input video ports), based on the input source selected by the user.

Video Display
The video display is done by taking input from the capture and decode sub-systems, and then presenting it on the display devices.

Video Stream
The video stream is taken from a user-selected file, decoded and its output displayed.

Video Encoder
The video encoder takes an input from capture, encodes the video in H.264 format (including sub-stream encode) and gives the encoded bit stream to the user.

Video Decoder
The video decoder takes multi-channel input bit streams from the user and, after decoding, provides them as input to the display subsystem.
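As a rough illustration of how these features map onto processing chains, the following sketch (in C) enumerates each use case as an ordered list of stages. The stage names and chain definitions are illustrative assumptions only and are not taken from the SoC software package.

#include <stdio.h>

/* Hypothetical processing stages, named for illustration only. */
typedef enum {
    STAGE_CAPTURE,   /* 4x input video ports            */
    STAGE_FILE_IN,   /* user-selected bit-stream file   */
    STAGE_ENCODE,    /* H.264 encode (HDVICP)           */
    STAGE_DECODE,    /* H.264 decode (HDVICP)           */
    STAGE_DISPLAY,   /* display sub-system (HDVPSS)     */
    STAGE_FILE_OUT,  /* encoded bit stream to the user  */
    STAGE_END
} stage_t;

/* Each video feature is a chain of stages, terminated by STAGE_END. */
static const stage_t capture_display[] = { STAGE_CAPTURE, STAGE_DISPLAY, STAGE_END };
static const stage_t capture_encode[]  = { STAGE_CAPTURE, STAGE_ENCODE, STAGE_FILE_OUT, STAGE_END };
static const stage_t stream_decode[]   = { STAGE_FILE_IN, STAGE_DECODE, STAGE_DISPLAY, STAGE_END };

static void print_chain(const char *name, const stage_t *chain)
{
    static const char *names[] = { "capture", "file-in", "encode",
                                   "decode", "display", "file-out" };
    printf("%s:", name);
    for (; *chain != STAGE_END; chain++)
        printf(" -> %s", names[*chain]);
    printf("\n");
}

int main(void)
{
    print_chain("Video display", capture_display);
    print_chain("Video encode",  capture_encode);
    print_chain("Video decode",  stream_decode);
    return 0;
}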

Video Basics and H.264 Standard
H.264 is a video compression technique. Its first benefit is that it allows digital video to be used in transmission and storage environments that cannot support uncompressed/raw video; the other benefit is that video compression enables more efficient use of transmission and storage resources.
A video is a combination of frames displayed in sequence at a given frequency or time interval. The human eye and brain are more sensitive to lower frequencies, so the image is still recognisable even though some of the ‘information’ has been removed. For example, for frames displayed in a video sequence captured by a camera at 25 frames per second, the change between two consecutive frames occurs over a short interval of 1/25 of a second.

Figure 2: Video frame 1, 2 with homogeneous regions
Figure 3: Video frame with low pass filtered background
The human visual system is less sensitive to colour than to luminance (brightness). In the RGB (Red, Green, Blue) colour space all three components are equally important and are stored at the same resolution, but the image can be represented more efficiently by separating the luminance from the colour information, giving the YCbCr or YUV representation (Luma, Chroma blue, Chroma red).
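As a minimal sketch of this separation (assuming 8-bit samples and the commonly used BT.601 coefficients), the following function converts one RGB pixel to YCbCr; the function and helper names are illustrative.

#include <stdint.h>
#include <stdio.h>

/* Clamp a value to the 8-bit range 0..255 (helper for illustration). */
static uint8_t clamp_u8(double v)
{
    if (v < 0.0)   return 0;
    if (v > 255.0) return 255;
    return (uint8_t)(v + 0.5);
}

/* Convert one 8-bit RGB pixel to YCbCr (BT.601 approximation).
 * Y carries the luminance; Cb and Cr carry the colour-difference signals. */
static void rgb_to_ycbcr(uint8_t r, uint8_t g, uint8_t b,
                         uint8_t *y, uint8_t *cb, uint8_t *cr)
{
    double yf = 0.299 * r + 0.587 * g + 0.114 * b;

    *y  = clamp_u8(yf);
    *cb = clamp_u8(0.564 * (b - yf) + 128.0);  /* centred around 128 */
    *cr = clamp_u8(0.713 * (r - yf) + 128.0);
}

int main(void)
{
    uint8_t y, cb, cr;
    rgb_to_ycbcr(200, 100, 50, &y, &cb, &cr);  /* arbitrary example pixel */
    printf("Y=%u Cb=%u Cr=%u\n", y, cb, cr);
    return 0;
}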

Figure 4: YUV and RGB representations
Figure 5: Displayed Video Frame
Figure 6: Luma (Y) component
Figure 7: Cr, Cg, Cb components
The YUV sampling formats supported by H.264 for Y, Cb and Cr are shown in Figure 8. 4:4:4 sampling means that all three components have the same resolution, so a sample of each component exists at every pixel position; the numbers indicate the relative sampling rate of each component in the horizontal direction, i.e. for every four luminance samples there are four Cb and four Cr samples.
In 4:2:2 sampling (YUY2), the chroma components have the same vertical resolution as the luma but half the horizontal resolution, i.e. two Cb and two Cr samples for every four luminance samples. In 4:2:0 sampling (YV12), Cb and Cr each have half the horizontal and half the vertical resolution of Y. This format is very popular and is widely used in consumer applications such as digital television and video conferencing. An example pixel calculation follows:
Image resolution: 720 × 576 pixels
Y resolution: 720 × 576 samples, each represented with eight bits
4:4:4 Cb, Cr resolution: 720 × 576 samples, each eight bits
Total number of bits: 720 × 576 × 8 × 3 = 9 953 280 bits
4:2:0 Cb, Cr resolution: 360 × 288 samples, each eight bits
Total number of bits: (720 × 576 × 8) + (360 × 288 × 8 × 2) = 4 976 640 bits
The 4:2:0 version requires half as many bits as the 4:4:4 version.
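The same arithmetic can be expressed as a small helper that reports the number of bits in one frame for each sampling format; the function names and the assumption of eight bits per sample are for illustration only.

#include <stdio.h>

/* Bits for one 4:4:4 frame: Y, Cb and Cr all at full resolution. */
static unsigned long frame_bits_444(unsigned w, unsigned h)
{
    return (unsigned long)w * h * 8UL * 3UL;
}

/* Bits for one 4:2:0 frame: full-resolution Y plus half-by-half Cb and Cr. */
static unsigned long frame_bits_420(unsigned w, unsigned h)
{
    unsigned long luma   = (unsigned long)w * h * 8UL;
    unsigned long chroma = (unsigned long)(w / 2) * (h / 2) * 8UL * 2UL;
    return luma + chroma;
}

int main(void)
{
    printf("4:4:4 720x576: %lu bits\n", frame_bits_444(720, 576)); /* 9 953 280 */
    printf("4:2:0 720x576: %lu bits\n", frame_bits_420(720, 576)); /* 4 976 640 */
    return 0;
}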
Figure 8 shows the sampling of YUV types,
4:2:0 sampling
4:2:2 sampling
4:4:4 sampling
Figure 8: Types of YUV Sampling
The changes between video frames are caused by object motion, such as a moving vehicle. The trajectory of each pixel can be estimated between successive video frames, as shown below.

Figure 9: Video frame 1

Figure 10: Video frame 2
Figure 11: Video frame difference
As shown in Figure 12 and Figure 13, macro-block-based motion comparison is performed between the reference frame and the current frame.
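As a minimal sketch of this comparison, the following function computes the sum of absolute differences (SAD) between a 16 × 16 macro block of the current frame and a candidate block of the reference frame; the function name and the frame layout (8-bit luma plane with a row stride in samples) are assumptions for illustration.

#include <stdint.h>
#include <stdlib.h>

#define MB_SIZE 16  /* H.264 macro block is 16x16 luma samples */

/* Sum of absolute differences between the macro block at (cx, cy) in the
 * current frame and the candidate block at (rx, ry) in the reference frame.
 * Both frames are 8-bit luma planes with 'stride' samples per row. */
unsigned sad_16x16(const uint8_t *cur, const uint8_t *ref, int stride,
                   int cx, int cy, int rx, int ry)
{
    unsigned sad = 0;

    for (int y = 0; y < MB_SIZE; y++) {
        const uint8_t *c = cur + (cy + y) * stride + cx;
        const uint8_t *r = ref + (ry + y) * stride + rx;
        for (int x = 0; x < MB_SIZE; x++)
            sad += (unsigned)abs((int)c[x] - (int)r[x]);
    }
    return sad;
}

A motion estimator would evaluate many candidate positions (rx, ry) inside a search window around (cx, cy) and keep the candidate with the lowest SAD as the motion vector for that macro block.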

Figure 12: Macro block
Figure 13: Motion estimation

Software of SoC
Software components are distributed across the processors to share the processing, as shown in Figure 14:
VPSS M3 – is used for Video capture, display, scaling, de-interlacing
Video M3 – is used for H.264, MPEG-4 encode/decode
DSP – is used for additional SW based video processing and video analytics
ARM Cortex-A8 – is used for system control, GUI, SATA, Ethernet, USB and other IO
The software is built using a multi-processor structure called “Links and chains”. This structure is designed and optimized for multi-channel video applications in which multiple video frames need to be exchanged across the various video processing tasks. The internal interface to this structure is called the “Link API”. A conceptual sketch of this frame exchange is given below.
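The following is a conceptual sketch only, assuming that each link holds pending frame buffers, processes them and forwards them to the next link in the chain; the structure and function names are hypothetical and do not represent the actual Link API.

#include <stddef.h>

/* Hypothetical frame buffer exchanged between links (illustration only). */
struct frame {
    void   *data;      /* pointer to the video frame in shared memory */
    size_t  size;      /* payload size in bytes                       */
};

/* Hypothetical link: one processing stage in a chain.
 * Frames arrive in 'in', are processed, and are forwarded to the next link. */
struct link {
    struct frame *in[8];                      /* frames waiting to be processed */
    int           count;                      /* frames currently pending       */
    void        (*process)(struct frame *f);  /* e.g. capture, encode, ...      */
    struct link  *next;                       /* next link in the chain         */
};

/* Add one frame to a link's pending frames (no locking shown). */
int link_push(struct link *l, struct frame *f)
{
    if (l->count >= 8)
        return -1;              /* no room */
    l->in[l->count++] = f;
    return 0;
}

/* Run one step of a link: take the oldest frame, process it and forward it. */
void link_step(struct link *l)
{
    if (l->count == 0)
        return;                 /* nothing to do */
    struct frame *f = l->in[0];
    for (int i = 1; i < l->count; i++)
        l->in[i - 1] = l->in[i];
    l->count--;
    l->process(f);
    if (l->next)
        link_push(l->next, f);  /* hand the frame to the next link */
}

A chain such as capture → encode would then simply be a list of such links whose step functions are invoked in turn.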

Figure 14: Software of SoC
The key components are the different CPUs, operating systems, libraries, video codecs, drivers, BIOS and boot loaders. The components involved are:
ARM8/HOST A8
This is an ARM Cortex-A8 CPU that runs the Linux operating system with the U-Boot boot loader. This processor initializes the SoC interfaces, triggers the video codecs and the DSP processor according to the video operation chosen by the user, and passes the given parameter information over an interface called Syslink/IPC.
