Project Ideas

Idea 1: Tracking using PTZ Raspberry Pi camera

Light-weight tracking of an individual in an indoor environment can now be accomplished using the Raspberry Pi camera. Equipped with a pan/tilt servo mechanism, the camera can be rotated to track a person as they move through the environment. The project aims to build a system that automatically calibrates itself, identifies entrance/exit points, and tracks one person in an indoor environment for as long as possible. The implementation will run on Raspbian OS, a lightweight flavour of Linux based on Debian. Prior experience with the Raspberry Pi would be helpful. Computer vision algorithms for object detection and tracking will be implemented using the OpenCV API.
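As a minimal sketch of the tracking loop's core step, the function below turns a detected bounding box (for instance from OpenCV's HOG person detector) into pan/tilt corrections that re-centre the target. The proportional gain and the servo interface are assumptions for illustration, not part of the project specification:

```python
def pan_tilt_correction(bbox, frame_size, gain=0.05):
    """Return (pan, tilt) adjustments in degrees that re-centre the target.

    bbox       -- (x, y, w, h) of the detected person, in pixels
    frame_size -- (width, height) of the camera frame
    gain       -- degrees of servo movement per pixel of error (a tuning
                  value you would calibrate on the real pan/tilt rig)
    """
    x, y, w, h = bbox
    fw, fh = frame_size
    cx, cy = x + w / 2, y + h / 2            # centre of the detection
    err_x, err_y = cx - fw / 2, cy - fh / 2  # pixel error from frame centre
    # Negative sign: if the target is right of centre, pan right (reduce error).
    return (-gain * err_x, -gain * err_y)
```

In a full system this would be called once per frame, with the detection coming from, e.g., `cv2.HOGDescriptor` with the default people detector, and the returned corrections sent to the servo controller.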

Idea 2: Detecting Solar Photovoltaic Panels from Aerial Images

Solar panels, made up of a set of photovoltaic modules, are increasingly used in different parts of the world as a source of renewable energy. This project intends to assist a survey of the chronological increase in rooftop deployment of these panels by automatically detecting and segmenting solar panels in historical aerial images.

Using the highest-resolution imagery from Google Maps, the student is expected to develop a system that takes in a set of images, detects solar PVs on rooftops, and returns an estimate of the number and ratio of houses/buildings that deploy solar PVs.

The project requires prior knowledge of Computer Vision and Machine Learning.
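As a starting point only, a crude colour heuristic can flag candidate panel pixels before any learned detector is brought in: panels often appear as dark, blue-tinted regions in aerial imagery. The thresholds below are illustrative assumptions, not validated values, and a real system would replace this with a trained classifier or segmentation model:

```python
import numpy as np

def panel_ratio(image, blue_min=80, brightness_max=90):
    """Estimate the fraction of pixels likely belonging to solar panels.

    image -- H x W x 3 uint8 RGB array (e.g. an aerial image tile)
    A pixel is flagged when it is blue-dominant and dark; both thresholds
    are hypothetical tuning values for this sketch.
    """
    r = image[..., 0].astype(int)
    g = image[..., 1].astype(int)
    b = image[..., 2].astype(int)
    brightness = (r + g + b) / 3
    mask = (b >= blue_min) & (b > r) & (brightness <= brightness_max)
    return float(mask.mean())
```

Aggregating this ratio per building footprint, rather than per image, would be the natural next step towards the house-level counts the project asks for.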

Idea 3: Object region segmentation

While most object recognition methods are concerned with recognising an individual object or class of objects (car, chair, cup, ...) within clutter, identifying the presence of an object, without determining its class, is a prerequisite to solving many problems. This project will focus on finding a definition of "object-ness" and highlighting interest regions in images that are likely to contain an object. The project is research-oriented and requires good knowledge of image processing techniques.
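To make the idea concrete, one very simple "object-ness" cue is edge density: windows containing an object tend to have more gradient structure than uniform background. The sketch below scores a candidate window this way; it is one illustrative cue under that assumption, not the measure the project is expected to arrive at:

```python
import numpy as np

def edge_density_score(gray, box):
    """Score a candidate window by its mean gradient magnitude.

    gray -- 2-D float array (greyscale image)
    box  -- (x, y, w, h) candidate window in pixel coordinates
    A higher score suggests more edge structure, one weak proxy for
    the window containing an object rather than flat background.
    """
    x, y, w, h = box
    patch = gray[y:y + h, x:x + w]
    gy, gx = np.gradient(patch)            # per-axis finite differences
    return float(np.hypot(gx, gy).mean())  # mean gradient magnitude
```

A research direction would combine several such cues (colour contrast, closed boundaries, saliency) and learn how to weight them.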

Idea 4: Google Glass Guidance for Object Manipulations

Given an object, and a set of extracted videos of how people have used that object before, the project aims to study and assess the ways guidance can be given to users on the novel Google Glass platform. The student will be expected to develop a Glass app that interacts with a previously prepared dataset of objects and video snippets. A battery of qualitative and quantitative evaluation measures for human-Glass interaction will be implemented, tested and evaluated as part of the project. This project is co-supervised by Dr. Walterio Mayol-Cuevas.

Idea 5: What font is this?

The number of fonts installed by default on any operating system has grown considerably in recent years. Sometimes you see printed text and wonder which font it was written in. Can we build a computer vision/machine learning system that distinguishes fonts visually? Moreover, can this approach scale to hundreds of fonts? A careful look shows this is not an easy task. A probabilistic approach might be adopted to generate a confidence measure in the answer.
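One simple way to obtain the confidence measure mentioned above is to pass the raw per-font scores of whatever classifier is built through a softmax, yielding a probability distribution over fonts. This is a sketch of that final step only; the scores themselves, and the example font names, are assumed to come from a classifier the student would develop:

```python
import numpy as np

def font_confidences(scores, fonts):
    """Turn raw per-font classifier scores into a confidence distribution.

    scores -- sequence of real-valued scores, one per candidate font
    fonts  -- matching sequence of font names
    Uses a softmax (subtracting the max first for numerical stability);
    other calibration schemes are possible and may be better suited.
    """
    s = np.asarray(scores, dtype=float)
    p = np.exp(s - s.max())   # shift for stability, then exponentiate
    p /= p.sum()              # normalise to a probability distribution
    return dict(zip(fonts, p))
```

A low maximum confidence, or several fonts with similar probability, would then signal that the system should abstain rather than guess, which matters once the approach scales to hundreds of visually similar fonts.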