This blog will now be available at http://blog.hpenedones.org
I don't have plans to continue posting here, so please update your bookmarks to the new address.
In addition, my new homepage is now hosted at http://hpenedones.org
Thanks,
Hugo Penedones
Sunday, July 14, 2013
Tuesday, November 20, 2012
Machine Learning Workshop - Idiap EPFL 2012
Yesterday I attended this workshop at EPFL:
http://www.idiap.ch/workshop/mlws/
It was a good opportunity to see old friends and colleagues, and to hear about their latest research. In general, the quality of the talks was quite good, ranging from very theoretical machine learning (sparse coding, optimization, etc.) to commercial applications of computer vision (www.faceshift.com).
Somewhere in the middle of that spectrum, I also quite liked the talk about learning image local descriptors (BRIEF and LBGM) as a compact and efficient alternative to SIFT or SURF, which are hand-designed, slower and use more bits. There were also applications to speech, face analysis and even remote sensing.
Have a look at the program and keep an eye on it in the coming days, as the slides will probably become available. You will find several other interesting talks:
http://www.idiap.ch/workshop/mlws/programme-2012
Monday, November 12, 2012
Active Appearance Models
Lately, I have been working with Deformable Models and I am surprised by how well they can work.
In the video above I am using an Inverse Compositional Active Appearance Model, trained with images of myself. It's specifically tuned to my face, but I still find it quite impressive how well it can track it in real time!
On the other hand, this model is quite sensitive to lighting conditions and partial occlusions. Training it is also something of an art because, unlike with discriminative models, increasing the amount of training data might actually decrease performance. This happens because we use PCA to learn the linear models of shape and texture, and these degrade if the data has too much variation or noise.
Still, it's quite impressive what one can achieve by annotating a few images (about 50, in this case). In addition, as you annotate images, you can start training models that help landmark the next ones (a "bootstrapping" process, similar to the one used for compilers).
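For the curious, here is a minimal sketch of the PCA shape-model step in numpy. It assumes the annotated landmarks are already aligned (e.g. by Procrustes analysis) and stored in an array; the function names are mine, for illustration only, not from any particular AAM library.

import numpy as np

def train_shape_model(shapes, variance_to_keep=0.95):
    # shapes: (N, 2K) array; each row is the K aligned (x, y) landmarks
    # of one annotated face, flattened.
    mean_shape = shapes.mean(axis=0)
    centered = shapes - mean_shape
    # Principal components of the landmark variation, via SVD.
    _, singular_values, components = np.linalg.svd(centered, full_matrices=False)
    explained = (singular_values ** 2) / np.sum(singular_values ** 2)
    n_modes = int(np.searchsorted(np.cumsum(explained), variance_to_keep)) + 1
    return mean_shape, components[:n_modes]

def synthesize_shape(mean_shape, modes, params):
    # A shape is the mean plus a linear combination of the retained modes.
    return mean_shape + params @ modes

The same recipe is applied to the texture (pixel intensities warped to the mean shape), which is why noisy or overly varied annotations hurt: the retained modes start modelling noise instead of face variation.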
Friday, November 12, 2010
The AI set of functions
I recently read an article by Y. Bengio and Y. LeCun called "Scaling Learning Algorithms towards AI". You can also find it as a book chapter in "Large-Scale Kernel Machines", L. Bottou, O. Chapelle, D. DeCoste and J. Weston (eds), MIT Press, 2007.
In some respects it is an "opinion paper" in which the authors advocate for deep learning architectures and their vision of Machine Learning. However, I think the main message is extremely relevant. I was actually surprised to see how much it agrees with my own opinions.
Here is how I would summarize it:
- no learning algorithm can be completely universal, due to the "No free lunch theorem"
- that's not such a big problem: we don't care about the set of all possible functions
- we care about the "AI set", which contains the functions useful for vision, language, reasoning, etc.
- we need to create learning algorithms with an inductive bias towards the AI set
- the models should "efficiently" represent the functions of interest, in terms of having low Kolmogorov complexity
- researchers have exploited the "smoothness" prior extensively with non-parametric methods. However, many manifolds of interest have strong local variations.
- we need to explore other types of priors, more appropriate to the AI set.
The authors then give two examples of such "broad" priors: the sharing of weights in convolutional networks (inspired by translation invariance in vision) and the use of multi-layer architectures (which can be seen as levels of increasing abstraction).
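To make the weight-sharing prior a bit more concrete, here is a tiny numpy illustration of my own (not taken from the paper): a single 3x3 filter is reused at every position of the image, so the same 9 parameters cover the whole input and translation invariance is built in, instead of learning a separate weight per location as a fully connected layer would.

import numpy as np

def conv2d_valid(image, kernel):
    # Slide one shared kernel over the image ("valid" padding).
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    output = np.zeros((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            output[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return output

image = np.random.rand(28, 28)
edge_filter = np.array([[1., 0., -1.],
                        [2., 0., -2.],
                        [1., 0., -1.]])      # 9 shared parameters
response = conv2d_valid(image, edge_filter)  # 26x26 feature map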
Of course, this is where many alternatives remain open! Many other useful inductive biases could be found. That's where I think we should focus our research efforts! :)
Monday, November 8, 2010
Tutorial: handwritten digit recognition with convolutional neural networks
Saturday, October 23, 2010
NYC Machine Learning Symposium 2010
The event took place yesterday at the New York Academy of Sciences, a building right next to the World Trade Center. The views from the 40th floor were breathtaking:
The names of the participants in the room were no less impressive; in no particular order: Corinna Cortes (Google), Rob Schapire and David Blei (Princeton University), John Langford and Alex Smola (Yahoo), Yann LeCun (NYU), Sanjoy Dasgupta (Univ. California), Michael Collins (MIT), Patrick Haffner (AT&T), among many others.
I particularly liked seeing the latest developments in LeCun's group, including a demo by Benoit Corda and Clément Farabet on speeding up Convolutional Neural Networks with GPUs and FPGAs.
Alex Kulesza and Ben Taskar had a nice piece of work on "Structured Determinantal Point Processes", which can be seen as a probabilistic model with a bias towards diversity of the hidden structures.
Matthew Hoffman (with D. Blei and F. Bach) used stochastic gradient descent (widely used in the neural network community) for online training of topic models. Sean Gerrish and D. Blei had a fun application of topic models to predicting the votes of Senators!
I was also happy to see that there is some Machine Learning being applied to the problem of sustainability and the environment. Gregory Moore and Charles Bergeron had a poster on trash detection in lakes, rivers and oceans.
To conclude, the best student paper award went to a more theoretical paper by Kareem Amin, Michael Kearns and Umar Syed (U Penn) called "Parametric Bandits, Query Learning, and the Haystack Dimension", which defines a measure of complexity for multi-armed bandit problems in which the number of actions can be infinite (there is some analogy to the role of VC-dimension in other learning models).
There were probably many other interesting posters worth mentioning, but I didn't have the chance to check them all!
On the personal side: my summer internship at NEC Labs with David Grangier is about to finish. It was an amazing learning experience and I am very grateful for it.
Next step: back to Idiap Research Institute, EPFL and all the Swiss lakes and mountains! :)
Tuesday, July 6, 2010
Machine Learning recent sites
In the last few months (in which I haven't posted on this blog), a few interesting web platforms related to Machine Learning have popped up, most notably:
MLcomp.org - you can upload your datasets and/or your algorithms, and experiments will run automatically. You can then see statistics on classifier performance and computation times. It is intended to help researchers and practitioners compare different methods, and it works as a collaborative platform where code and data can be shared.
MetaOptimize.com - it hosts a great Q&A about Machine Learning and related topics, using the same web platform that StackOverflow uses for programming topics.
I find these two websites a great way to improve collaboration within the ML community. Highly recommended!
The last link is more market-oriented, and it comes from Google:
Google Predict: it packages well-established ML algorithms in an API that developers can use to make predictions on their own datasets.
Tuesday, December 8, 2009
Optimism as Artificial Intelligence Pioneers Reunite
Just a short link to a New York Times article about AI.
In 1978, Dr. McCarthy wrote, “human-level A.I. might require 1.7 Einsteins, 2 Maxwells, 5 Faradays and .3 Manhattan Projects.”
I think we probably have the genius scientists around, but I'm not so sure about the 0.3 Manhattan Projects!
Update: You might also want to read Shane Legg's latest predictions about human-level artificial intelligence.
Monday, December 7, 2009
TEDx Geneva
Today I attended the first edition of TEDx Geneva. This was a locally organized event following the spirit of the original TED talks: "ideas worth spreading".
I think the program was really good, because this region hosts so many incredible organizations. We could listen to people from CERN, EPFL, the United Nations, the Red Cross, and some independent Swiss adventurers and entrepreneurs. We also had the opportunity to (re)watch videos of some of the most popular TED talks recorded in the US.
All the speakers spoke in English, which in my opinion degraded the level of their presentations, simply because it's not their native language. Even if one is relatively fluent, it's always harder to make jokes and be entertaining. The event was also a bit too long, covering the full day.
Still, I greatly appreciated the experience and recommend it to others!
Wednesday, November 11, 2009
Choosing my tools
I'm doing research in Machine Learning and Computer Vision, so each time we have an idea for a new algorithm, I have to write code, run experiments and compare results. I have realized that the experimental part is really the bottleneck: we have more ideas than we can test. For this reason, it's critical to choose a good set of tools to work with. This is a list of my current choices, but I am continuously looking for more efficient tools.
Operating system:
Snow Leopard - In my opinion, Mac OS X has an excellent balance between control and usability. You have beautiful graphical interfaces that just work, but still a fully functional Unix shell.
Update: Lately my preference is to use Ubuntu Linux, because I have far fewer problems with apt-get than with MacPorts. Professionally, I also sometimes use Windows. It seems hard to stick to one OS when you change projects, jobs, etc.
Text Editor / Programming Environment:
Textmate - again, it's an excellent compromise between simplicity, usability and customizability. You can create your own code snippets (using shell commands, ruby, python and more), but to me it seems much easier to learn than vim or emacs.
Update: Again, I went back to basics and started using vim and gvim. It is available on all platforms, it has a much bigger user base, and I really like the power of the command mode. In addition, I recently learnt how to write simple vi plugins using python, which literally means I can do whatever I want with my editor.
Programming Language:
C++ - absolute power. So powerful that one must be very careful using it. Some people say C++ is actually a federation of languages, comprising C, the object-oriented features, templates and the standard library. Although I've been using it for a while, I feel there is always more to learn about it.
Update: In addition to C++ (and C, which I really love), I also started using some scripting languages. First I learnt Lua, so that I could use the Torch Machine Learning library. Then I started using python, which I really love due to the wide availability of (easily installable) libraries. I also look forward to learning the new C++11 standard, which seems quite neat.
Build System (new):
cmake - it's cross-platform and simple enough to start using. I don't know the advanced features, but it's pretty easy to create a project that generates libraries and executables and links properly against other dependencies (like OpenCV).
Source control system:
git - I was using subversion before, but the idea of distributed repositories makes sense. You can work locally and still commit changes that you synchronize later. So far, I use less than 2% of the commands!
Update: git is definitely here to stay. Now I use private and public hosted repositories with Github or Bitbucket.
Cloud Computing (new):
Amazon EC2 - I also used the IBM Smart Cloud, but Amazon has more features and better APIs. Recently, with the introduction of the spot instances, things also got a lot cheaper when you need to process large amounts of data.
NoSQL Databases (new):
redis - redis is what we can call a "data structure server", and it's probably the nicest piece of software I have started using recently. It is just beautiful. Simple. Intuitive. Fast. I cannot recommend it enough.
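To give a flavour of what "data structure server" means, here is a tiny sketch using the redis-py client. It assumes a local server running on the default port, and the keys are made up for illustration (the zadd mapping form needs redis-py 3.x or later).

import redis

r = redis.Redis(host="localhost", port=6379)

# Plain key/value.
r.set("experiment:42:status", "running")
print(r.get("experiment:42:status"))

# A list used as a simple job queue.
r.rpush("jobs", "train_model_A", "train_model_B")
next_job = r.lpop("jobs")

# A sorted set keeping classifiers ranked by validation accuracy.
r.zadd("leaderboard", {"adaboost": 0.91, "kmeans_features": 0.87})
best = r.zrevrange("leaderboard", 0, 0, withscores=True)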
Computer Vision Library:
OpenCV - it's quite useful for the low- and intermediate-level things (loading and saving images, converting color spaces, edge detection, SURF descriptors, etc.). It also has higher-level algorithms, but when you're doing research in the field these are not so useful. It lacks some object-oriented design, but version 2.0 is starting to move in that direction.
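As an illustration of the low-level usage I have in mind, here is a short sketch with OpenCV's Python bindings; the file names are just placeholders.

import cv2

image = cv2.imread("input.jpg")                   # load an image
gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)    # convert color space
edges = cv2.Canny(gray, 100, 200)                 # edge detection
small = cv2.resize(gray, (128, 128))              # resize
cv2.imwrite("edges.png", edges)                   # save the result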
Machine Learning library:
None. Here I'm re-inventing the wheel, because I want to know everything about wheels. I write my own implementations of AdaBoost, the EM algorithm, k-means and the like. For a nice take on code re-use in the machine learning domain, read this discussion at mloss.org
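As an example of the scale of wheel I'm talking about, a basic k-means fits in a few lines of numpy; this is a didactic sketch, not my actual research code.

import numpy as np

def kmeans(points, k, n_iters=100, seed=0):
    rng = np.random.default_rng(seed)
    # Initialize centers with k distinct random points.
    centers = points[rng.choice(len(points), size=k, replace=False)]
    for _ in range(n_iters):
        # Assign each point to its nearest center.
        distances = np.linalg.norm(points[:, None, :] - centers[None, :, :], axis=2)
        labels = distances.argmin(axis=1)
        # Move each center to the mean of its assigned points.
        new_centers = np.array([
            points[labels == j].mean(axis=0) if np.any(labels == j) else centers[j]
            for j in range(k)
        ])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels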
Object Serialization Library:
boost-serialization - I need to save the models to files in order to load them later. If I were using OpenCV for Machine Learning, I could also use the functions it provides for serialization, but I'm not. With boost I can serialize objects to XML or binary format. It's a bit tricky to use, because it relies on C++ templates, and when you get compile-time errors it's really hard to understand why. I'm not especially happy with this choice, but once you get your code right, it works pretty well.
Debugging:
gdb - pretty much the standard. I haven't yet chosen an interface for it... Maybe I don't even need one. I find ddd's look and feel really horrible! Maybe I will start using Xcode's interface to gdb for debugging. Not sure. Actually, 90% of the time I identify the bug by adding some prints and looking at the code, so I don't even run gdb.
Static code analysis:
cppcheck - this is a recent choice, but it seems to give some useful alerts.
Run-time code analysis:
valgrind - I'm not using it regularly yet, but it's at the top of my priorities. This should be the ultimate tool for finding memory leaks in your code. I didn't manage to install it on Snow Leopard, which might actually lead me to downgrade to Leopard. I have to think about it.
Plotting:
gnuplot - really powerful and configurable. This one is a safe bet, although I heard there is nice python software as well.
Image Processing:
ImageMagick (the convert command) - good for resizing pictures, converting colors, etc. from the shell; it's not meant to replace gimp or the like.
Video Processing:
Here I should be using mplayer / mencoder from the command line, but again I still have to solve some compatibility problems with Snow Leopard. ffmpeg is also useful.
Terminal multiplexer:
screen - sometimes you need to run experiments remotely and you want your processes to keep running smoothly after you log off. Use screen for this.
Screen sharing:
synergy - I work directly on my macbook and connect another screen to it. However, I also want to interact with my linux desktop at work. I use synergy to get an extended desktop, sharing the mouse and keyboard across different computers over the network. It's really cool!
Automated backups:
Time Machine - I have an external hard disk which backs up pretty much everything automatically when I connect it to my macbook. Things on my desktop are backed up by a central procedure run by my research institute.
Update: I still use Time Machine in one computer, but now I rely more on cloud storage. I use Google Drive for some documents, PicasaWeb for pictures and use either Github or Bitbucket for source code or latex papers.
Shell tools:
cat, head, tail, cut, tr, grep, sort, uniq.... sometimes sed and awk...
I mostly use these to manipulate data files before feeding them to gnuplot to make some graphics.
Document preparation system:
latex - this is the standard in the scientific community and there are good reasons for that.
bibtex - to do proper citations to other people's articles or books.
Source code documentation:
doxygen - I don't really develop libraries for other people to use, but generating documentation automatically from your source code can help you improve it. If you use doxygen with graphviz, you can, for example, see the class hierarchies and dependencies of your code.
What tools do you use? Do you have any recommendations for me? I guess the OS, editor and programming language are the most contentious choices! But what about the others? Any ideas?