Tuesday, December 8, 2009
Optimism as Artificial Intelligence Pioneers Reunite
Just a short link to a New York Times article about AI.
In 1978, Dr. McCarthy wrote, “human-level A.I. might require 1.7 Einsteins, 2 Maxwells, 5 Faradays and .3 Manhattan Projects.”
I think we probably have the genius scientists around, but I'm not so sure about the 0.3 Manhattan Projects!
Update: You might also want to read Shane Legg's latest predictions about human-level artificial intelligence.
Monday, December 7, 2009
TEDx Geneva
Today I attended the first edition of TEDx Geneva. This was a locally organized event following the spirit of the original TED talks: "ideas worth spreading".
I think the program was really good, because in this region there are so many incredible organizations. We could listen to people from CERN, EPFL, the United Nations, the Red Cross and some independent Swiss adventurers and entrepreneurs. We also had the opportunity to (re)watch videos of some of the most popular TED talks recorded in the US.
All the speakers spoke in English, which in my opinion degraded the level of their presentations, simply because it's not their native language. Even if one is relatively fluent, it's always harder to make jokes and be entertaining. The event was also a bit too long, covering the full day.
Still, I greatly appreciated the experience and recommend it to others!
Wednesday, November 11, 2009
Choosing my tools
I'm doing research in the fields of Machine Learning and Computer Vision, so each time we have an idea for a new algorithm, I have to write code, run experiments and compare results. I have realized that the experimental part is really the bottleneck: we have more ideas than we can test. For this reason, it's critical to choose a good set of tools to work with. This is a list of my current choices, but I am continuously looking for more efficient tools.
Operating system:
Snow Leopard - In my opinion, Mac OS X has an excellent balance between control and usability. You get beautiful graphical interfaces that just work, but still have a fully functional Unix shell.
Update: Lately my preference is to use Ubuntu Linux, because I have far fewer problems with apt-get than with macports. Sometimes, professionally, I also use Windows. It seems hard to stick to one OS when you change projects, jobs, etc.
Text Editor / Programming Environment:
Textmate - again, it's an excellent compromise between simplicity, usability and customizability. You can create your own code snippets (using shell commands, ruby, python and more), but to me it seems much easier to learn than vim or emacs.
Update: Again, I went back to basics and started using vim and gvim. It is available on all platforms, there is a much bigger user base, and I really like the power of the command mode. In addition, I recently learned how to write simple vim plugins using Python, which literally means I can do whatever I want with my editor.
Programming Language:
C++ - absolute power. So powerful that one must be very careful using it. Some people say C++ is actually a federation of languages, which includes C, the object-oriented parts, templates and the standard libraries. Although I've been using it for a while, I feel there is always more to learn about it.
Update: In addition to C++ (and C, which I really love), I also started using some scripting languages. First I learnt Lua, so that I could use the Torch Machine Learning Library. Then I started using Python, which I really love due to the wide availability of (easily installable) libraries. Ah, and I look forward to learning the new C++11 standard, which seems quite neat.
Build System (new):
cmake - it's cross-platform and simple enough to start using. I don't know the advanced features, but it's pretty easy to create a project that generates libraries and executables and links properly with other dependencies (like OpenCV).
Source control system:
git - I was using subversion before, but I guess the idea of distributed repositories makes sense. You can work locally and still commit changes that you can synchronize later. So far, I use less than 2% of the commands!
Update: git is definitely here to stay. Now I use private and public hosted repositories with Github or Bitbucket.
Cloud Computing (new):
Amazon EC2 - I also used the IBM Smart Cloud, but Amazon has more features and better APIs. Recently, with the introduction of spot instances, things also got a lot cheaper when you need to process large amounts of data.
NoSQL Databases (new):
redis - redis is what we can call a "data structure server", and it's probably the nicest piece of software I have started using recently. It is just beautiful. Simple. Intuitive. Fast. I cannot recommend it enough.
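To give a flavour of what "data structure server" means, here is a tiny sketch with the redis-py client (assuming a redis server running locally on the default port; the key names are made up for illustration):

import redis  # the redis-py client

r = redis.Redis(host="localhost", port=6379)

# Plain key/value.
r.set("experiment:42:status", "running")
print(r.get("experiment:42:status"))               # b'running'

# A list used as a simple work queue.
r.rpush("jobs", "train_model_A", "train_model_B")
print(r.lpop("jobs"))                               # b'train_model_A'

# A sorted set keeping the best validation scores (redis-py 3.x style zadd).
r.zadd("scores", {"model_A": 0.87, "model_B": 0.91})
print(r.zrevrange("scores", 0, -1, withscores=True))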
Computer Vision Library:
OpenCV - it's quite useful for the low- and intermediate-level things (loading and saving images, converting color spaces, edge detection, SURF descriptors, etc.). It also has higher-level algorithms, but when you're doing research in the field, these are not so useful. It lacks some object-oriented design, but version 2.0 is starting to move in that direction.
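Just to illustrate the kind of low-level things I mean, here is a sketch using OpenCV's Python bindings (I actually use the C++ API, which maps almost one-to-one; "input.jpg" is a placeholder file name):

import cv2

img = cv2.imread("input.jpg")                      # load an image from disk
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)       # convert color space
edges = cv2.Canny(gray, 50, 150)                   # edge detection
small = cv2.resize(img, (0, 0), fx=0.5, fy=0.5)    # downscale by half
cv2.imwrite("edges.png", edges)                    # save results
cv2.imwrite("small.png", small)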
Machine Learning library:
None. Here I'm re-inventing the wheel, because I want to know everything about wheels. I do my own implementations of AdaBoost, the EM algorithm, k-means and the like. For a nice discussion of code re-use in the machine learning domain, see this thread at mloss.org.
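To make the wheel re-invention concrete, here is a toy k-means in Python/NumPy (my actual implementations are in C++; this is just a sketch of the algorithm itself):

import numpy as np

def kmeans(X, k, n_iters=100, seed=0):
    """Toy k-means: X is an (n, d) array; returns (centers, labels)."""
    rng = np.random.RandomState(seed)
    centers = X[rng.permutation(len(X))[:k]]
    for _ in range(n_iters):
        # Assignment step: each point goes to its closest center.
        dists = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
        labels = dists.argmin(axis=1)
        # Update step: each center moves to the mean of its assigned points.
        new_centers = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                                else centers[j] for j in range(k)])
        if np.allclose(new_centers, centers):
            break
        centers = new_centers
    return centers, labels

# e.g. centers, labels = kmeans(np.random.rand(200, 2), k=3)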
Object Serialization Library:
boost-serialization - I need to save the models to files in order to load them later. If I were using OpenCV for Machine Learning, I could also use the functions they provide for serialization, but I'm not. With boost I can serialize objects to XML or binary format. It's a bit tricky to use, because it relies on C++ templates, and when you get compile-time errors it's really hard to understand why. I'm not especially happy with this choice, but once you get your code right, it works pretty well.
Debugging:
gdb - pretty much the standard. I haven't yet chosen an interface for it... Maybe I don't even need one. I find ddd's look and feel really horrible! Maybe I will start using Xcode's interface to gdb for debugging. Not sure. Actually, 90% of the time I identify the bug by adding some prints and looking at the code, so I don't even run gdb.
Static code analysis:
cppcheck - this is a recent choice, but it seems to give some useful alerts.
Run-time code analysis:
valgrind - I'm not using it regularly yet, but it's at the top of my priorities. This should be the ultimate tool to help you find memory leaks in your code. I didn't manage to install it on Snow Leopard, which might actually lead me to downgrade to Leopard. I have to think about it.
Plotting:
gnuplot - really powerful and configurable. This one is a safe bet, although I've heard there is nice Python software as well.
Image Processing:
ImageMagick (convert command) - good for resizing pictures, converting colors, etc. from the shell. It's not meant to replace gimp or the like.
Video Processing:
Here I should be using mplayer / mencoder from the command line, but again I still have to solve some compatibility problems with Snow Leopard. ffmpeg is also useful.
Terminal multiplexer:
screen - sometimes you need to run experiments remotely and want your processes to keep running smoothly after you log off. Use screen for this.
Screen sharing:
synergy - I work directly on my macbook and I connect another screen to it. However, I also want to interact with my linux desktop at work. I use synergy to get an extended desktop and share the mouse and keyboard across different computers over the network. It's really cool!
Automated backups:
Time Machine - I have an external hard disk which backs up pretty much everything automatically when I connect it to my macbook. Things on my desktop machine are backed up by a central procedure implemented at my research institute.
Update: I still use Time Machine in one computer, but now I rely more on cloud storage. I use Google Drive for some documents, PicasaWeb for pictures and use either Github or Bitbucket for source code or latex papers.
Shell tools:
cat, head, tail, cut, tr, grep, sort, uniq.... sometimes sed and awk...
I mostly use these to manipulate data files before feeding them to gnuplot to make some graphics.
Document preparation system:
latex - this is the standard in the scientific community and there are good reasons for that.
bibtex - to do proper citations to other people's articles or books.
Source code documentation:
doxygen - I don't really develop libraries for other people to use, but generating documentation automatically from your source code can help you improve it. If you use doxygen together with graphviz, you can, for example, see the class hierarchies and dependencies of your code.
What tools do you use? Do you have any recommendations for me? I guess the OS, editor and programming language are the most controversial! But what about the others? Any ideas?
Sunday, November 1, 2009
Open PhD and Postdoc positions
My supervisor is leading a new European project called MASH, which stands for "Massive Sets of Heuristics". There are open positions here in Switzerland, as well as in France, Germany and the Czech Republic.
The goal is to solve complex vision and goal-planning problems in a collaborative way. It will be tested in 3D video games and also on a real robotic arm. Collaborators will submit pieces of code (heuristics) that can help the machine solve the problem at hand. In the background, machine learning algorithms will be running to choose the best heuristics.
If you are interested in: probabilities, applied statistics, information theory, signal processing, optimization, algorithms and C++ programming, you might consider applying!
Wednesday, October 14, 2009
Gmail Machine Learning
I just quickly tried the new Gmail Labs feature "Got the wrong Bob?" and it actually works quite nicely! I put in some email addresses of family members, followed by the address of an old professor who has the same first name as one of my cousins, and... Gmail found it! :) It suggested right away that I switch to the correct person, based on context!
The other new feature, called "Don't forget Bob", is probably simpler, but quite useful as well. As I typed the names of some close friends, I got suggestions for other friends I often email together with them.
I wonder if the models behind this feature are very complicated. Probably not. I guess one just has to estimate the probability of each email address in our contacts appearing in the "To:" field, given the addresses we have already typed. To estimate these, you can use a frequentist approach and count how many times this happened in the past. With this in hand, "Got the wrong Bob?" will notice unlikely email addresses and "Don't forget Bob" will suggest likely ones that are missing.
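A back-of-the-envelope version of that counting idea in Python; the email history and the 0.1 threshold below are of course made up for illustration:

from collections import defaultdict
from itertools import combinations

# Toy history of past "To:" fields (made-up addresses).
past_emails = [
    {"mum", "dad", "cousin_bob"},
    {"mum", "dad", "cousin_bob"},
    {"prof_bob", "colleague_anna"},
]

solo_counts = defaultdict(int)   # how often each address was a recipient
pair_counts = defaultdict(int)   # how often two addresses were recipients together

for recipients in past_emails:
    for addr in recipients:
        solo_counts[addr] += 1
    for a, b in combinations(sorted(recipients), 2):
        pair_counts[(a, b)] += 1

def p_given(a, b):
    """Estimate of P(b is a recipient | a is a recipient), by simple counting."""
    return pair_counts[tuple(sorted((a, b)))] / solo_counts[a] if solo_counts[a] else 0.0

typed = {"mum", "dad"}
candidate = "prof_bob"
score = sum(p_given(t, candidate) for t in typed) / len(typed)
# "Got the wrong Bob?": a recipient that rarely co-occurs with the others looks suspicious.
print(candidate, "is", "suspicious" if score < 0.1 else "plausible")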
I think it's a really cool idea, in the same spirit as the "Forgotten Attachment Detector". A bit of machine learning helping daily life!
Monday, October 5, 2009
Schools kill creativity
My good friend Miguel called my attention to a TED talk that you might also find interesting:
Ken Robinson argues that "schools kill creativity", because kids are not given the chance to discover their interests and talents. From very early on, students get a negative reward for making mistakes, which makes them too risk-averse. He goes further, saying that the educational system is built to create university professors, leaving the majority of the students behind along the way. More space should be given to other forms of expressing intelligence, such as the arts or sports.
I strongly recommend this video. Besides the interest of the subject, the presentation is actually quite funny; it somehow resembles British-style stand-up comedy!
Sunday, August 2, 2009
(My) ideal society
Each individual is respected as such and has the freedom and the means to pursue their own interests without having to harm others.
I don't know what it would look like. It's a pretty simple (non-constructive) definition, though.
I'm sure mathematicians like it!
Read more at my webpage:
http://hpenedones.googlepages.com/thoughtsonlife
Note: This essay will stay in beta longer than any Google product.
Wednesday, July 22, 2009
Personal productivity, happiness and optimization algorithms
I spend lots of time wondering about the best ways to be both more productive and happy. Curiously, I'm coming to the conclusion that this is exactly what I should not do.
Being productive, like being happy, requires living the present moment, not thinking about it.
If you want to complete a task, the best strategy is just doing it! You might start by setting up a plan, a sequence of smaller actions that lead you to your goal, but once you have this, just do it. Spending too much energy re-planning and judging yourself along the way is just counter-productive.
Curiously, this is not easy! Our brain seems to have some bad habits hard-wired. Whether we want it or not, we start thinking about the past or making predictions about the future. Worse, we start multi-tasking (as you read this blog, you might also be listening to music, doing some work, or chatting with your friends on Facebook).
Perhaps the only solution is to re-train our neural connections. One way to do it would be meditating, or repeatedly performing a task that requires one to be focused on the present. Feeling, not thinking. After enough practice, the brain should start rewiring itself.
I recently came across this famous Hemingway sentence:
“Happiness in intelligent people is the rarest thing I know.”
Perhaps intelligent people have a tendency to plan too much? Planning involves predicting the reward associated with a set of possible actions and choosing the best ones. What if the reward function is not easily predictable? Perhaps the best optimization algorithm in this case is a greedy one. Don't plan to be happy only next year, or next month, or even tomorrow. You are dealing with a real-time multi-agent system, you have only partial and noisy data about the world, the system is recursive, and finding the optimal reward is probably as NP-hard as it gets!
Increasing the scope
In the past, I sometimes didn't publish potentially interesting thoughts on this blog just because they didn't exactly fit the "about intelligence" topic.
I'm fed up with this self-imposed censorship. In the future the scope will be broader.
Wednesday, May 6, 2009
Machine Learning to AI
John Langford wrote a very interesting post on the failures of Artificial Intelligence research and why Machine Learning has been a safer bet. Read it here.
Wednesday, April 1, 2009
Google CADIE vs Wolfram Alpha
Google already has a tradition of April Fools' jokes: this year they are introducing an Artificial Intelligence brain!
They describe the development process of their so-called CADIE (Cognitive Autoheuristic Distributed-Intelligence Entity) like this:
"For several years now a small research group has been working on some challenging problems in the areas of neural networking, natural language and autonomous problem-solving. Last fall this group achieved a significant breakthrough: a powerful new technique for solving reinforcement learning problems, resulting in the first functional global-scale neuro-evolutionary learning cluster."
Remember, this is an April Fools' hoax. But now compare it with Wolfram's announcement of the new Wolfram Alpha:
"I wasn’t at all sure it was going to work. But I’m happy to say that with a mixture of many clever algorithms and heuristics, lots of linguistic discovery and linguistic curation, and what probably amount to some serious theoretical breakthroughs, we’re actually managing to make it work."
I find them quite similar! ;)
Now more seriously: I don't doubt Wolfram Alpha will have interesting features, but please don't try to sell it as the ultimate AI search engine. By the way, Daniel Tunkelang has a recent and well-informed post on this topic.
Update: Indeed, this sneak preview of Wolfram Alpha shows some cool features! In the meantime, Google also took some steps in the direction of better public data/statistics visualization.
Saturday, March 28, 2009
Machine Learning artwork
Today I tried out a great site for generating tag clouds, called wordle.net. I rendered some images just by copy-pasting the Wikipedia text about machine learning.
The results were pretty cool and I guess one could print awesome t-shirts with them. What do you say?
This one officially became my computer wallpaper:
Wednesday, March 18, 2009
ACM Paris Kanellakis Theory and Practice Award 2008
The 2008 ACM Paris Kanellakis Theory and Practice Award was awarded to Corinna Cortes and Vladimir Vapnik "for the development of Support Vector Machines, a highly effective algorithm for classification and related machine learning problems".
It's not the first time this award is given to Machine Learning people. In 2004 it was awarded to Yoav Freund and Robert Schapire "for the development of the theory and practice of boosting and its applications to machine learning."
I found it a bit weird that they left Bernhard Boser and Isabelle Guyon out of the prize, because they were Vapnik's co-authors on the 1992 paper "A training algorithm for optimal margin classifiers", which I believe is considered to be the first paper on Support Vector Machines...
Anyway, congratulations to the winners. These are indeed elegant algorithms with sound theoretical foundations and numerous successful applications to vision, speech, natural language and robotics, to name just a few.
---------------------------
Remarks:
Thanks to my cousin Rui for the link to this news.
---------------------------
Related post:
Vapnik's picture explained.
Friday, February 6, 2009
Social features on this blog
The readers of this blog can now:
1. Easily subscribe to the RSS feed with their reader of choice [left panel].
2. Decide to become a visible "follower" of this blog [left panel].
3. Rate each blog entry from 1 to 5 stars [end of each post].
I would be particularly happy to see people rating the posts. It's less informative than writing comments, but it's still very good feedback for me.
Thanks!
Wednesday, January 28, 2009
Vapnik's picture explained
This is an extremely geeky picture! :) Let's try to explain it:
First of all, as many of you know, the gentleman in the picture is Prof. Vladimir Vapnik. He is famous for his fundamental contributions to the field of Statistical Learning Theory, such as the Empirical Risk Minimization (ERM) principle, VC-dimension and Support Vector Machines.
Then we notice the sentence on the board: it resembles the famous "All your base are belong to us"! This is a piece of geek culture that emerged from a "broken English" translation of a Japanese video game for the Sega Mega Drive.
Wait, but they replaced the word "Base" with "Bayes"!?
Yes, that Bayes, the British mathematician known for Bayes' theorem.
Okay, seems fair enough, we are dealing with people from statistics...
Just when we think things cannot get any geekier, we realize there is a scary inequality written at the top of the whiteboard:
My goodness, what's this?! Okay, that's when things get really technical:
This is a probabilistic bound for the expected risk of a classifier under the ERM framework. In simple terms, it relates the classifier's expected test error to the training error on a dataset of size l, where the cardinality of the set of loss functions is N.
If I'm not mistaken, the bound holds with probability (1 - eta) and applies only to loss functions bounded above by 1.
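I can't read the exact expression off the board, but for a finite set of N loss functions with values in [0, 1], the classical bound of this form looks something like this, holding with probability at least 1 - \eta:

R(\alpha) \;\le\; R_{emp}(\alpha) + \sqrt{\frac{\ln N - \ln \eta}{2l}}

In words: the gap between test and training error shrinks as the dataset grows, and grows only logarithmically with the number of functions we choose from.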
Sweet! Now that we got the parts, what's the big message?
Well, it's basically a statement about the superiority of Vapnik's learning theory over the Bayesian alternative. In a nutshell, the Bayesian perspective is that we start with some prior distribution over a set of hypotheses (our beliefs) and we update it according to the data that we see. We then look for an optimal decision rule based on the posterior distribution.
On the other hand, in Vapnik's framework there are no explicit priors, nor do we try to estimate the probability distribution of the data. This is motivated by the fact that density estimation is an ill-posed problem, and therefore we want to avoid this intermediate step. The goal is to directly minimize the probability of making bad decisions in the future. If implemented through Support Vector Machines, this boils down to finding the decision boundary with maximal margin separating the classes.
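For reference, the "maximal margin" criterion corresponds to the hard-margin SVM primal problem:

\min_{w, b} \; \frac{1}{2}\|w\|^2 \quad \text{subject to} \quad y_i\,(w \cdot x_i + b) \ge 1 \;\; \forall i

where the x_i are the training points with labels y_i in {-1, +1}; maximizing the margin 2 / \|w\| is the same as minimizing \|w\|^2.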
And that's it, folks! I hope you had fun decoding this image! :)
Computer Vision vs Computer Graphics
If I had to explain what computer vision is all about, in just one snapshot, I would show you this:
Computer Graphics algorithms go from the parameter space to the image space (rendering), computer vision algorithms do the opposite (inverse-rendering). Because of this, computer vision is basically a (very hard) problem of statistical inference.
The common approach nowadays is to build a classifier for each kind of object and then search over (part of) the parameter space explicitly, normally by scanning the image over all possible locations and scales. The remaining challenge is still huge: how can a classifier learn and generalize, from a finite set of examples, what the fundamental characteristics of an object are (shape, color) and what is irrelevant (changes in illumination, rotations, translations, occlusions, etc.)?
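A caricature of that scanning loop in Python, just to make the brute-force nature explicit (the window size, step, threshold and classifier are placeholders; real detectors are far more careful about efficiency):

import numpy as np

def detect(image, classifier, window=(64, 64), step=8, subsample_factors=(1, 2, 4)):
    """Brute-force sliding-window detection over locations and scales.

    `image` is a 2D numpy array and `classifier(patch)` is any function
    returning a confidence score for a window-sized patch (placeholder).
    """
    detections = []
    wh, ww = window
    for f in subsample_factors:
        scaled = image[::f, ::f]       # crude downscaling; a real system would smooth first
        H, W = scaled.shape[:2]
        for y in range(0, H - wh + 1, step):
            for x in range(0, W - ww + 1, step):
                patch = scaled[y:y + wh, x:x + ww]
                if classifier(patch) > 0.5:               # arbitrary confidence threshold
                    detections.append((x * f, y * f, f))  # position in original coordinates
    return detections

# Example with a dummy classifier that fires on bright patches:
# hits = detect(np.random.rand(480, 640), lambda p: float(p.mean() > 0.6))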
This is what is keeping us busy! ;)
PS - Note that changes in illumination induce apparent changes in the color of the object and rotations induce apparent changes in shape!
Thursday, January 8, 2009
Stationary Features - Google Tech Talk
François Fleuret, my PhD advisor, recently gave a talk about object detection at Google (Zurich offices).
You can now see it online:
If you wonder where my research will try to extend the work done so far, just go to minute 45:30!