The rapid development of technology and the widespread availability of information on the World Wide Web have made it easy to search for, upload, download, and share all kinds of personal and public videos. People can find many interesting or useful facial actions in them, but this abundance also leads to information overload: locating the same facial actions again and again is tedious. Users therefore need a tool that finds all the facial actions they need within a video.
Annotation is the process of a human providing manual descriptions of what is in an image (or any other data source, for that matter). For example, you can annotate a face with the location of the eyes, with the presence of a smile, or with the emotion you think is being displayed. An online annotation service is a tool that helps annotate such resources on the Internet.

The goal of our project is to build an online annotation tool that allows the user to record when certain facial actions (e.g. smiles) occur in a video. Annotation cuts down search time because the viewer does not have to re-watch the whole video; instead, the user can browse the frames that the tool extracts from the video and choose the ones of interest. The tool uses existing automatic facial expression recognition systems. However, their results are not always satisfactory, so the user is able to modify the automatically generated results. All results are stored in standards-compliant XML, and the tool can visualize the annotations on the videos of facial expressions.

Finally, the tool is designed to be easy for annotators to use. The software finds facial actions in the video and shows the user when each one starts and ends. It provides buttons to save or cancel the current operation, as well as fast-forward and rewind buttons that help users reach the part they want to examine. It also offers a button that lets users add facial expressions manually, because the software sometimes misses an expression and the user must be able to correct that.
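As a rough illustration of the storage step described above, the sketch below turns per-frame detections (as an automatic recognizer might produce) into start/end time intervals and serializes them as XML. The element and attribute names (`annotations`, `action`, `start`, `end`) are a hypothetical schema chosen for this example, not the tool's actual format.

```python
import xml.etree.ElementTree as ET

def frames_to_intervals(frame_hits, fps):
    """Merge consecutive frame indices where an action was detected
    into (start_time, end_time) intervals in seconds."""
    intervals = []
    start = prev = None
    for f in sorted(frame_hits):
        if start is None:
            start = prev = f
        elif f == prev + 1:          # still inside the same run of frames
            prev = f
        else:                        # gap: close the current interval
            intervals.append((start / fps, (prev + 1) / fps))
            start = prev = f
    if start is not None:
        intervals.append((start / fps, (prev + 1) / fps))
    return intervals

def intervals_to_xml(video_name, action, intervals):
    """Serialize annotation intervals as a simple XML document
    (hypothetical schema for illustration only)."""
    root = ET.Element("annotations", video=video_name)
    for begin, end in intervals:
        ET.SubElement(root, "action", type=action,
                      start=f"{begin:.3f}", end=f"{end:.3f}")
    return ET.tostring(root, encoding="unicode")

# Example: smiles detected on frames 0-2 and 10-11 of a 25 fps clip
doc = intervals_to_xml("clip.mp4", "smile",
                       frames_to_intervals([0, 1, 2, 10, 11], 25.0))
```

Storing intervals rather than raw frame lists keeps the XML compact and maps directly onto the "when it starts and when it ends" display the tool presents to the annotator.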