UALG VISION LAB: THE SPARSE CODING PROJECT

A sparse coding hierarchy for realtime object recognition in complex scenes

Keywords: visual perception, models, deep neural networks, associative memory, invariant object recognition, cognitive robotics


FCT project headed by Joao Rodrigues, tentative start 1/1/2014, duration 12 months. Execution: CINTAL.


  • POSTDOC WANTED! Contact Joao Rodrigues or Hans du Buf


  • Brief overview

    The lab already has models of V1 and V2: simple, complex and end-stopped cells that create multiscale line, edge and keypoint maps. Optimised models run in real time on a GPU; see the keypoints video. These representations are being used to develop deep hierarchies for invariant object detection and recognition. However, such models cannot (yet) be run on small mobile robot platforms like a Pioneer. Hence, different solutions are being explored.
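
    The lab's actual V1/V2 models are multiscale and GPU-optimised; purely as an illustration of the underlying idea, the textbook single-scale version models simple cells as a quadrature (even/odd) Gabor pair and complex cells as the summed squared responses of that pair. The sketch below assumes illustrative parameter values (filter size, wavelength, sigma) that are not taken from the lab's implementation.

```python
import numpy as np

def gabor_pair(size=21, wavelength=6.0, theta=0.0, sigma=3.0):
    """Quadrature (even/odd) Gabor pair: a standard model of V1 simple cells.
    All parameter values here are illustrative, not the lab's settings."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # coordinate along the carrier
    yr = -x * np.sin(theta) + y * np.cos(theta)
    env = np.exp(-(xr ** 2 + yr ** 2) / (2.0 * sigma ** 2))  # Gaussian envelope
    even = env * np.cos(2.0 * np.pi * xr / wavelength)
    odd = env * np.sin(2.0 * np.pi * xr / wavelength)
    return even, odd

def _conv(img, kernel):
    """Circular 2-D convolution via the FFT (adequate for a sketch)."""
    kpad = np.zeros_like(img, dtype=float)
    kpad[:kernel.shape[0], :kernel.shape[1]] = kernel
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kpad)))

def complex_cell(img, theta=0.0):
    """Complex-cell energy: squared even plus squared odd simple-cell response,
    giving a phase-invariant measure of local oriented structure."""
    even, odd = gabor_pair(theta=theta)
    return _conv(img, even) ** 2 + _conv(img, odd) ** 2
```

    A complex-cell energy map computed at several orientations (and, in the full model, several scales) is the raw material from which the line, edge and keypoint maps mentioned above are derived.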

    This sparse coding project will develop a deep hierarchy, similar to HMAX-type models with several layers, using a special type of Hopfield or Hamming network with cliques of output neurons: the Gripon-Berrou network. The cliques will make the system robust to noise and missing data, with increasingly sparse coding at the higher levels. Instead of employing the models of simple and complex cells, which are computationally very expensive, a cheaper solution based on Canny edges and Harris corners will be explored. On the robot, the system may be implemented on a CubieBoard or ODROID-U2/X2, but eventually even on neuromimetic hardware with spiking (LIF) neurons: the 48-chip SpiNNaker board from the Manchester group in the Human Brain Project (or the analogue equivalent Spikey from the Heidelberg group).

    POSTDOC: NN expert, preferably with profound knowledge of models of real neurons and recurrent NNs, and ideally with experience of NN simulators (e.g. PyNN).


    Send comments to: jrodrig@ualg.pt or dubuf@ualg.pt

    Go back to the Hans du Buf or Joao Rodrigues homepages or visit the UAlg Vision Laboratory.


    Last update: Nov 2013 - HdB