A key property of the brain's computation is its processing of visual information. The human brain can rapidly recognize thousands of objects while using less power than a modern computer in "quiet mode". A potential key component enabling this remarkable performance is the use of sparse representations: at any given moment, only a tiny fraction of visual neurons fire. In this proposed research we wish to use machine learning tools and advanced optimization techniques to learn sparse, hierarchical representations that enable real-time, high-quality visual recognition. The work goes far beyond standard "dictionary learning" for low-level image patches: it seeks to go all the way from the analog signal to high-level recognition with sparse, hierarchical representations, and to develop novel, efficient algorithms that exploit the sparsity of the representation.
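To make the notion of a sparse representation concrete, the sketch below infers a sparse code for a signal over a fixed dictionary using ISTA (iterative soft-thresholding), a standard solver for the ℓ1-penalized sparse-coding objective. It is an illustrative toy, not the proposal's algorithm: the dictionary here is random rather than learned, and all dimensions and parameter values are arbitrary choices for the example.

```python
import numpy as np

def ista_sparse_code(D, x, lam=0.05, n_iter=200):
    """Minimize 0.5*||x - D@a||^2 + lam*||a||_1 over codes a via ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the smooth part
    a = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ a - x)           # gradient of the quadratic term
        z = a - grad / L                   # gradient step
        a = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return a

rng = np.random.default_rng(0)
D = rng.normal(size=(64, 256))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
a_true = np.zeros(256)
a_true[[3, 40, 100]] = [1.5, -2.0, 1.0]    # a signal built from 3 of 256 atoms
x = D @ a_true

a = ista_sparse_code(D, x)
print("nonzero coefficients:", np.count_nonzero(np.abs(a) > 1e-3))
print("relative reconstruction error:",
      np.linalg.norm(x - D @ a) / np.linalg.norm(x))
```

The point of the example is the regime the proposal targets: the signal is reconstructed accurately while only a small fraction of the 256 coefficients are active, which is what makes sparsity-aware algorithms cheap to evaluate.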
The research team includes experts in machine learning, sparse optimization, sampling, neural computation, and object recognition. Led by Prof. Yair Weiss and Prof. Daphna Weinshall (HUJI), the team will develop new algorithms for learning sparse, hierarchical representations and for using these representations in visual recognition on a small power budget.
The main outcomes of this research project are algorithms for learning sparse, hierarchical representations and methods for using these representations in visual recognition on a small power budget. Most of the first year will be devoted to initial sketches of the approach, including the development of rudimentary algorithmic solutions; subsequent years will be used to continuously improve these approaches through a cycle of experimentation and theory.
Research team:
- Yair Weiss
- Daphna Weinshall
- Yonina Eldar
- Ron Meir
- Amnon Shashua