2009-04-23: Siwei Lyu (University at Albany, SUNY)

Computer Science Department, University at Albany, SUNY

Title: Reducing Statistical Dependencies in Natural Signals Using Radial Gaussianization Abstract: We consider the problem of transforming a signal to a representation in which the components are statistically independent. When the signal is generated as a linear transformation of independent Gaussian or non-Gaussian sources, the solution may be computed using a linear transformation (PCA, or ICA, respectively). Here, we examine a complementary case, in which the source is non-Gaussian but elliptically symmetric. In this situation, the source cannot be decomposed into independent components using a linear transform, but we show that a simple nonlinear transformation, which we call radial Gaussianization (RG), is able to remove all dependencies. We apply this methodology to natural signals, demonstrating that the joint distributions of bandpass filter responses, for both sound and images, are better described as elliptical than linearly transformed independent sources. Consistent with this, we demonstrate that the reduction in dependency achieved by applying RG to either pairs or blocks of bandpass filter responses is significantly greater than that achieved by PCA or ICA.
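As a rough illustration of the RG idea (not code from the talk), the sketch below whitens the data so the elliptical density becomes spherically symmetric, then nonlinearly remaps the radial component so it matches the radial distribution of a standard Gaussian, which is chi with d degrees of freedom. The nonparametric CDF matching and function names are our own choices.

<code python>
import numpy as np
from scipy.stats import chi, rankdata

def radial_gaussianize(X):
    """Nonparametric sketch of radial Gaussianization.

    X: (n, d) samples assumed to come from an elliptically symmetric
    density.  Returns transformed samples whose radial marginal
    matches that of a standard d-dimensional Gaussian."""
    n, d = X.shape
    # 1. Whiten so the elliptical density becomes spherically symmetric.
    C = np.cov(X, rowvar=False)
    evals, evecs = np.linalg.eigh(C)
    W = evecs @ np.diag(evals ** -0.5) @ evecs.T
    Y = (X - X.mean(axis=0)) @ W
    # 2. Map the empirical radial CDF onto the chi(d) CDF, the radial
    #    distribution of an isotropic standard Gaussian in d dimensions.
    r = np.linalg.norm(Y, axis=1)
    u = rankdata(r) / (n + 1.0)      # empirical CDF values in (0, 1)
    r_new = chi.ppf(u, df=d)         # target Gaussian radii
    return Y * (r_new / r)[:, None]
</code>

Note that step 2 only rescales each sample along its own direction, which is exactly the class of transforms that preserves elliptical symmetry while changing the radial marginal.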

2009-04-22: Ping Li (Cornell University)

Department of Statistical Science, Cornell University

Title: ABC-Boost: Adaptive Base Class Boost for Multi-class Classification Abstract: The multinomial logit model is one of the popular models for solving multi-class classification problems. We develop a tree-based gradient boosting algorithm for fitting the multinomial logit model, which requires the selection of a base class. We propose adaptively and greedily choosing the base class at each boosting iteration. Our proposed algorithm is named abc-mart, where “abc” stands for “adaptive base class” and “mart” is a gradient boosting algorithm developed by Professor J. Friedman (2001). Our experiments demonstrate the improvement of abc-mart over mart on several public data sets.
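A minimal sketch of the adaptive base-class idea, assuming scikit-learn regression trees as the weak learners; the shrinkage, tree-fitting, and line-search details of the actual abc-mart differ.

<code python>
import numpy as np
from sklearn.tree import DecisionTreeRegressor

def softmax(F):
    E = np.exp(F - F.max(axis=1, keepdims=True))
    return E / E.sum(axis=1, keepdims=True)

def abc_boost_round(X, y, F, nu=0.1, depth=3):
    """One round of a toy abc-mart: try every class b as the base
    class, fit trees to the constrained gradients, and keep the b
    that gives the lowest training loss.  F is the (n, K) score matrix."""
    n, K = F.shape
    Y = np.eye(K)[y]                 # one-hot labels
    P = softmax(F)
    best_loss, best_F = np.inf, None
    for b in range(K):               # greedy search over base classes
        Fb = F.copy()
        for k in range(K):
            if k == b:
                continue
            # negative gradient of the multinomial deviance w.r.t. F_k
            # under the sum-to-zero constraint F_b = -sum_{k != b} F_k
            resid = (Y[:, k] - P[:, k]) - (Y[:, b] - P[:, b])
            tree = DecisionTreeRegressor(max_depth=depth).fit(X, resid)
            Fb[:, k] = F[:, k] + nu * tree.predict(X)
        Fb[:, b] = -np.delete(Fb, b, axis=1).sum(axis=1)
        loss = -np.log(softmax(Fb)[np.arange(n), y] + 1e-12).sum()
        if loss < best_loss:
            best_loss, best_F = loss, Fb
    return best_F
</code>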

2009-03-11: Juergen Schmidhuber (Swiss AI Lab IDSIA)

Swiss AI Lab IDSIA

Title: How to Learn a Program? Abstract: Based on several invited keynotes for international conferences. Numerous papers on this subject (1990-2008), plus overviews:
http://www.idsia.ch/~juergen/rnn.html
http://www.idsia.ch/~juergen/unilearn.html
http://www.idsia.ch/~juergen/goedelmachine.html
We will discuss novel ways of making robots and other agents smarter through machine learning algorithms. The focus will be on sequence learning as opposed to conventional pattern recognition. We will outline very general, asymptotically optimal problem solvers pioneered in our lab, as well as practical applications based on state-of-the-art adaptive feedback neural networks, with examples ranging from challenging control tasks to handwriting recognition and music composition.

2008-12-17: Jason Weston (NEC Laboratories America)

NEC Laboratories America, Princeton, NJ

Title: “Supervised Semantic Indexing” and “Connecting Natural Language to the Non-linguistic World: The Concept Labeling Task” Abstract: In this talk I will present two (not completely related) pieces of research in text processing/understanding.

The first part of the talk presents a class of models that are discriminatively trained to directly map from the word content in a query-document or document-document pair to a ranking score. Like latent semantic indexing (LSI), our models take account of correlations between words (synonymy, polysemy). However, unlike LSI, our models are trained with a supervised signal directly on the task of interest, which we argue is the reason for our superior results. We provide an empirical study on Wikipedia documents, using the links to define document-document or query-document pairs, where we obtain state-of-the-art performance using our method.
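The "directly map word content to a ranking score" idea can be made concrete with a low-rank bilinear model. Below is a hedged sketch, assuming tf-idf input vectors, a scoring matrix parameterized as W = Uᵀ V + I, and a margin ranking loss over (query, linked document, random document) triples; the function and hyperparameters are illustrative, not the authors' code.

<code python>
import numpy as np

def ssi_sgd_step(U, V, q, d_pos, d_neg, lr=0.01, margin=1.0):
    """One SGD step for a low-rank semantic ranking model.

    Score of a (query, document) pair of tf-idf vectors:
        f(q, d) = q @ (U.T @ V + I) @ d
    Trained so that a linked (relevant) document d_pos scores higher
    than a randomly drawn one d_neg by at least the margin."""
    def score(d):
        return q @ d + (U @ q) @ (V @ d)   # identity term + low-rank term
    if margin - score(d_pos) + score(d_neg) > 0:   # hinge is active
        Uq = U @ q
        U += lr * np.outer(V @ d_pos - V @ d_neg, q)  # d f/dU = (V d) q^T
        V += lr * np.outer(Uq, d_pos - d_neg)         # d f/dV = (U q) d^T
    return U, V
</code>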

The second part of the talk presents a general framework and learning algorithm for a novel task termed concept labeling: each word in a given sentence has to be tagged with the unique physical entity (e.g. person, object or location) or abstract concept it refers to. We show how grounding language using our framework allows both world knowledge and linguistic information to be used seamlessly during learning and prediction. We show experimentally using a simulated environment of interactions between actors, objects and locations that world knowledge in our framework is indeed beneficial, without which ambiguities in language, such as word sense disambiguation and reference resolution, cannot be resolved.

Joint work with Bing Bai, Antoine Bordes, Nicolas Usunier, David Grangier and Ronan Collobert.

2008-12-03: Fei-Fei Li (Princeton University)

Department of Computer Science, Princeton University

Title: Human Motion Categorization & Detection Abstract: Detecting and categorizing human motion in unconstrained video sequences is an important problem in computer vision, potentially benefiting a large variety of applications such as video search and indexing, smart surveillance systems, video game interfaces, etc. In this talk, we focus on two questions: where are the moving humans in a video sequence? and what motions are they performing? We propose two statistical models for human action categorization based on spatial and spatio-temporal local features: an unsupervised bag-of-words model for motion recognition, as well as a constellation-of-bags-of-features hierarchical model. In the second part of the talk, we present a fully automatic framework to detect and extract arbitrary human motion volumes from challenging real-world videos collected from YouTube.

2008-11-26: Alexander Berg (Columbia University)

Department of Computer Science, Columbia University

Title: Efficient Classification with IKSVMs and Extensions Abstract: I will discuss work making some kernelized SVMs efficient enough to apply to sliding window detection applications. This special case of a kernelized SVM can have accuracy significantly better than a linear classifier and can be evaluated exponentially faster than a general kernelized classifier.

Straightforward classification using kernelized SVMs requires evaluating the kernel for a test vector and each of the support vectors. For a class of kernels we show that one can do this much more efficiently. In particular we show that one can build histogram intersection kernel SVMs (IKSVMs) with runtime complexity of the classifier logarithmic in the number of support vectors as opposed to linear for the standard approach. We further show that by precomputing auxiliary tables we can construct an approximate classifier with constant runtime and space requirements, independent of the number of support vectors, with negligible loss in classification accuracy on various tasks. This approximation also applies to 1-Chi^2 and other kernels of similar form.

This result makes some kernelized SVMs fast enough for applications like sliding window detection, and extensions allow very fast learning.
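The logarithmic-time evaluation hinges on the fact that the intersection kernel decomposes over dimensions, so each dimension's contribution can be read off from sorted support-vector values plus precomputed partial sums, using one binary search per dimension. A minimal sketch following that description (variable names are our own):

<code python>
import numpy as np

def build_iksvm_tables(SV, coef):
    """Precompute per-dimension tables for fast histogram-intersection
    kernel SVM evaluation.  SV: (m, d) support vectors; coef: (m,)
    values alpha_i * y_i."""
    order = np.argsort(SV, axis=0)               # sort each dimension
    xs = np.take_along_axis(SV, order, axis=0)   # sorted SV values
    cs = coef[order]                             # matching coefficients
    A = np.cumsum(cs * xs, axis=0)               # prefix sums of a_i * x_i
    B = cs[::-1].cumsum(axis=0)[::-1]            # suffix sums of a_i
    return xs, A, B

def iksvm_decision(z, tables, bias=0.0):
    """Evaluate sum_i a_i * sum_j min(x_i[j], z[j]) + b in
    O(d log m) time instead of O(d m)."""
    xs, A, B = tables
    m, d = xs.shape
    out = bias
    for j in range(d):
        r = np.searchsorted(xs[:, j], z[j], side='right')  # x_i[j] <= z[j]
        below = A[r - 1, j] if r > 0 else 0.0  # contribute a_i * x_i[j]
        above = B[r, j] if r < m else 0.0      # contribute a_i * z[j]
        out += below + above * z[j]
    return out
</code>

The constant-time approximation mentioned above goes one step further by tabulating each dimension's piecewise function on a fixed grid, so the binary search is replaced by a table lookup.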

http://acberg.com

2008-11-25: Daniel D. Lee (University of Pennsylvania)

Dept. of Electrical and Systems Engineering, University of Pennsylvania

Title: Neural Correlates of Robot Algorithms Abstract: I will present some recent work on computational algorithms in robotics, and discuss whether they can perhaps provide insight into how the brain may accomplish similar tasks. In particular, I will discuss specific algorithms used for vision, motor control, localization, and navigation in robots. An overarching theme that emerges from these examples is the need to properly account for uncertainty and noise in the environment. I will show how current machine systems approach this problem, and hope to spur discussion about related processes among neurons and in the brain.

2008-11-12: Umar Syed (Princeton University)

Department of Computer Science, Princeton University

Title: Apprenticeship and Imitation Learning Abstract: In supervised learning, a learner receives training examples labeled by an expert. In reinforcement learning, the training signal is more diffuse, and consists of feedback in the form of rewards. In this talk, I'll discuss algorithms for learning problems that have features of both supervised and reinforcement learning. Specifically, I'll describe how one can leverage advice from an expert to learn an optimal policy in an environment where the true reward function is only partially known. An interesting consequence of our analysis is a novel performance guarantee for multiplicative weights algorithms. I'll also describe a method for modeling the behavior of a reward-seeking agent, by using information about the reward function to impose constraints on the space of parameters searched by an EM algorithm.

Joint work with Rob Schapire, Michael Bowling (U Alberta), and Jason Williams (AT&T)

2008-11-05: Leon Bottou (NEC Research Labs)

NEC Research Laboratories, Princeton, NJ

Title: Algebraic Structures for Deep Learning: A Path to AI?

2008-10-29: Ronan Collobert (NEC Research Labs)

NEC Research Laboratories, Princeton, NJ

Title: Large Scale Learning for Natural Language Processing Abstract: We describe a neural network architecture that given a sentence, outputs a host of language processing predictions: part-of-speech tags, chunks, named entity tags, semantic roles, semantically similar words and the likelihood that the sentence makes sense (grammatically and semantically) using a language model. The entire network is trained jointly on all these tasks using weight-sharing, an instance of multitask learning. All the tasks use labeled data except the language model which is learnt from unlabeled text and represents a novel way of performing semi-supervised learning for the shared tasks. We show how both multitask learning and semi-supervised learning improve the generalization of the shared tasks, resulting in a learnt model with state-of-the-art performance.
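A toy sketch of the shared architecture in a modern framework (PyTorch, which postdates the talk): one word-embedding lookup table and one hidden layer are shared by all tasks, with a small output head per task. The window-based formulation and all sizes are illustrative simplifications.

<code python>
import torch
import torch.nn as nn

class MultiTaskTagger(nn.Module):
    """Shared embedding + shared hidden layer, one head per task
    (POS, chunking, NER, SRL, language model, ...)."""
    def __init__(self, vocab_size, emb_dim, window, hidden, task_sizes):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)   # shared weights
        self.shared = nn.Sequential(
            nn.Linear(window * emb_dim, hidden), nn.Tanh())
        self.heads = nn.ModuleDict(
            {task: nn.Linear(hidden, n) for task, n in task_sizes.items()})

    def forward(self, word_ids, task):
        # word_ids: (batch, window) indices of the words around the
        # position being tagged
        h = self.shared(self.embed(word_ids).flatten(1))
        return self.heads[task](h)

# Multitask training alternates between tasks, so the embedding and the
# shared layer receive gradients from every task, including the
# unsupervised language model trained on unlabeled text.
</code>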

2008-10-28: Kilian Weinberger (Yahoo Research)

Yahoo Research

Title: Large Margin Taxonomy Embedding with an Application to Document Categorization Abstract: Applications of multi-class classification, such as document categorization, often appear in cost-sensitive settings. Recent work has significantly improved the state of the art by moving beyond flat classification through incorporation of class hierarchies. We present a novel algorithm that goes beyond hierarchical classification and estimates the latent semantic space that underlies the class hierarchy. In this space, each class is represented by a prototype and classification is done with the simple nearest neighbor rule. The optimization of the semantic space incorporates large margin constraints that ensure that for each instance the correct class prototype is closer than any other. We show that our optimization is convex and can be solved efficiently for large data sets. Experiments on the OHSUMED medical journal database yield state-of-the-art results on topic categorization.

2008-05-01: Tony Jebara (Columbia University)

Department of Computer Science, Columbia University

Title: Visualization & Matching for Graphs and Data Abstract: Given a graph between N high-dimensional nodes, can we faithfully visualize it in just a few dimensions? We present an algorithm that improves the state of the art in dimensionality reduction by extending the Maximum Variance Unfolding method. Visualizations are shown for social networks, species trees, image datasets and human activity.

If the connectivity between N nodes is unknown, can we link them to build a graph? The space to explore is daunting with 2^(N^2) choices but two interesting subfamilies are tractable: matchings and b-matchings. We place distributions over these families and recover the optimal graph or perform Bayesian inference over graphs efficiently using belief propagation algorithms. Higher order distributions over matchings can also be handled efficiently via fast Fourier algorithms. Applications are shown in tracking, network reconstruction, classification, and clustering.

2008-04-23: David Blei (Princeton University)

Department of Computer Science, Princeton University

Title: Supervised Topic Models (joint work with Jon McAuliffe) Abstract: A surge of recent research in machine learning and statistics has developed new techniques for finding patterns of words in document collections using hierarchical probabilistic models. These models are called “topic models” because the discovered word patterns often reflect the underlying topics that permeate the documents; however, topic models also naturally apply to data such as images and biological sequences. The first part of this talk will describe the basic algorithmic and modeling issues in topic modeling.

In the second part of the talk, I will introduce supervised latent Dirichlet allocation (sLDA), a topic model of labelled documents that accommodates a variety of response types. I will derive a maximum-likelihood procedure for parameter estimation, which relies on variational approximations to handle intractable posterior expectations. Prediction problems motivate this research: I will present results on predicting movie ratings from the text of reviews, and predicting web-page popularity from summaries of their contents. I will report comparisons of sLDA to modern regularized regression, as well as to an unsupervised LDA analysis followed by a separate regression.
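For reference, the sLDA generative process for a document with N words and a real-valued response can be written as follows (the Gaussian-response case, following Blei and McAuliffe's sLDA paper):

<code latex>
% sLDA generative model for one labelled document (Gaussian response)
\theta \sim \mathrm{Dirichlet}(\alpha), \qquad
z_n \mid \theta \sim \mathrm{Mult}(\theta), \qquad
w_n \mid z_n \sim \mathrm{Mult}(\beta_{z_n}), \qquad n = 1,\dots,N
% the response depends on the empirical topic frequencies
\bar{z} = \frac{1}{N}\sum_{n=1}^{N} z_n, \qquad
y \mid z_{1:N} \sim \mathcal{N}\!\left(\eta^{\top}\bar{z},\; \sigma^{2}\right)
</code>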

2008-04-16: Michael Littman (Rutgers University)

Department of Computer Science, Rutgers University

Title: Efficient Model Learning for Reinforcement Learning Abstract: This talk addresses the problem of learning efficiently to make sequential decisions with a particular focus on generalizing experience without forfeiting formal learning-time guarantees. I'll summarize the theoretically motivated algorithms my group has been developing that exhibit practical advantages over existing learning algorithms. I'll also toss in some video footage of robots learning to move around by exploring efficiently.

2008-03-12: Jihun Hamm (U. Penn)

Grasp Laboratory, University of Pennsylvania

Title: Generative and discriminant learning approaches toward invariant object recognition Abstract: In this talk I will present generative and discriminative learning approaches toward invariant face/object recognition problems. Images of objects and human faces show multiple sources of variation that make recognition problems challenging. Among these, pose and illumination changes are of particular interest, since how they affect appearance is relatively well understood.

In the first part of the talk, we discuss a generative model of face images. In this model, image variations due to illumination change are accounted for by a low-dimensional linear subspace, whereas variations due to pose change are approximated by a geometric transformation of images in the subspace. Priors for the transformation can be derived without knowledge of 3D models of the faces. This model can be efficiently learned via MAP estimation and multiscale registration techniques. Furthermore, we show that the priors can also be used in a discriminant setting in the form of a regularizer. We demonstrate how to combine multiple invariances into Linear Discriminant Analysis and Nonparametric Discriminant Analysis, as well as the kernelized versions of those.

In the second part of the talk, we take a novel view on problems involving linear subspaces. By treating subspaces as basic elements of data, we can make learning algorithms adapt naturally to problems with linearly invariant structures. We propose a unifying view of subspace-based learning methods by formulating the problems on the Grassmann manifold, which is the set of fixed-dimensional subspaces of a Euclidean space. We show the feasibility of the approach by using Grassmann kernel functions such as the Projection kernel and the Binet-Cauchy kernel. Experiments with real image databases show that the proposed method performs well compared with state-of-the-art algorithms.
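As a concrete illustration of the two Grassmann kernels named above, a minimal sketch, assuming each subspace is given by a matrix with orthonormal columns:

<code python>
import numpy as np

def projection_kernel(Y1, Y2):
    """Projection kernel between two m-dimensional subspaces of R^D,
    each given by a (D, m) matrix with orthonormal columns:
    k(Y1, Y2) = ||Y1^T Y2||_F^2."""
    return np.linalg.norm(Y1.T @ Y2, 'fro') ** 2

def binet_cauchy_kernel(Y1, Y2):
    """Binet-Cauchy kernel: k(Y1, Y2) = det(Y1^T Y2)^2."""
    return np.linalg.det(Y1.T @ Y2) ** 2

def subspace_basis(X, m):
    """Represent a set of image vectors X (D, n) by an orthonormal
    basis of its m-dimensional principal subspace."""
    U, _, _ = np.linalg.svd(X, full_matrices=False)
    return U[:, :m]
</code>

Both kernels depend on the columns of Y1 and Y2 only through the subspaces they span, which is what makes them well defined on the Grassmann manifold; the resulting kernel matrix can be fed to any standard kernel method.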

If time allows, I will additionally address a dimensionality reduction method with invariance knowledge.

2008-03-05: CBLL Seminar: John Langford (Yahoo! Research)

TITLE: Learning without the Loss

In many natural situations, you can probe the loss (or reward) for one action, but you do not know the loss of other actions. This problem is simpler and more tractable than reinforcement learning, but still substantially harder than supervised learning because it has an inherent exploration component. I will discuss two algorithms for this setting.

(1) Epoch-greedy, which is a very simple method for trading off between exploration and exploitation. (2) Offset Tree, which is a method for reducing this problem to binary classification.
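To make the exploration/exploitation trade-off of (1) concrete, here is a minimal sketch of the Epoch-Greedy control flow, with the learner, context stream, and rewards left abstract; the Offset Tree reduction of (2) is not sketched here.

<code python>
import random

def epoch_greedy(contexts, n_actions, reward_fn, learn,
                 n_epochs=100, exploit_len=10):
    """Toy sketch of Epoch-Greedy for contextual bandits.

    Each epoch: one uniform-exploration step, whose (x, a, r) triple is
    added to the training set, followed by a block of exploitation
    steps that simply follow the current learned policy."""
    data, policy = [], None
    ctx = iter(contexts)
    for _ in range(n_epochs):
        # exploration step: a uniformly random action yields unbiased
        # training data for the (partially observed) loss
        x = next(ctx)
        a = random.randrange(n_actions)
        data.append((x, a, reward_fn(x, a)))
        policy = learn(data)            # any supervised-style learner
        # exploitation block: act greedily with the current policy
        for _ in range(exploit_len):
            x = next(ctx)
            reward_fn(x, policy(x))     # collect reward, store nothing
    return policy
</code>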

2007-04-18: CBLL Special Seminar: Geoffrey Hinton (Toronto)

RARE EVENT!! 2007-04-18: Wed April 18, 2007, 11:30 AM. LOCATION: Room 1302, Warren Weaver Hall, Courant Institute, New York University, 251 Mercer Street, New York, NY (NOTE: not the usual location)

TITLE: An efficient way to learn deep generative models

Geoffrey Hinton, Canadian Institute for Advanced Research and University of Toronto.

SLIDES OF THE TALK: [PPT (4MB)] [PDF (4MB)] [DjVu (2MB)]

I will describe an efficient, unsupervised learning procedure for deep generative models that contain millions of parameters and many layers of hidden features. The features are learned one layer at a time without any information about the final goal of the system. After the layer-by-layer learning, a subsequent fine-tuning process can be used to significantly improve the generative or discriminative performance of the multilayer network by making very slight changes to the features.

I will demonstrate this approach to learning deep networks on a variety of tasks including: Creating generative models of handwritten digits and human motion; finding non-linear, low-dimensional representations of very large datasets; and predicting the next word in a sentence. I will also show how to create hash functions that map similar objects to similar addresses, thus allowing hash functions to be used for finding similar objects in a time that is independent of the size of the database.
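The layer-at-a-time procedure can be illustrated with its best-known instance: greedily stacking restricted Boltzmann machines trained with contrastive divergence (CD-1). The numpy sketch below is a simplification under that assumption, not the talk's exact code.

<code python>
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def rbm_cd1_epoch(V, W, b, c, lr=0.05):
    """One epoch of CD-1 for a binary RBM: visible batch V (n, d),
    weights W (d, h), visible bias b (d,), hidden bias c (h,)."""
    ph = sigmoid(V @ W + c)                        # P(h = 1 | v)
    h = (rng.random(ph.shape) < ph).astype(float)  # sample hidden units
    pv = sigmoid(h @ W.T + b)                      # one-step reconstruction
    ph2 = sigmoid(pv @ W + c)
    W += lr * (V.T @ ph - pv.T @ ph2) / len(V)     # data minus model stats
    b += lr * (V - pv).mean(axis=0)
    c += lr * (ph - ph2).mean(axis=0)
    return W, b, c

def pretrain_stack(V, layer_sizes, epochs=10):
    """Greedy stacking: train one RBM, then use its hidden-unit
    probabilities as the 'data' for the next layer."""
    layers = []
    for h in layer_sizes:
        d = V.shape[1]
        W, b, c = 0.01 * rng.standard_normal((d, h)), np.zeros(d), np.zeros(h)
        for _ in range(epochs):
            W, b, c = rbm_cd1_epoch(V, W, b, c)
        layers.append((W, b, c))
        V = sigmoid(V @ W + c)                     # feed up to next layer
    return layers
</code>

The fine-tuning pass mentioned above (generative or discriminative) would then adjust all layers jointly, starting from these pretrained weights.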

2007-04-11: CBLL Seminar: Pradeep Ravikumar (CMU)

2007-04-11: Room 1221, 719 Broadway, Wed April 11, 11:30 AM

TITLE: Techniques for approximate inference and structure learning in Discrete Markov Random Fields

Markov random fields (MRFs), or undirected graphical models, are graphical representations of probability distributions. Each graph represents a family of distributions – the nodes of the graph represent random variables, the edges encode independence assumptions, and weights over the edges and cliques specify a particular member of the family.

Key inference tasks within this framework include estimating the normalization constant (also called the partition function), event probability estimation, and computing the most probable configuration. In addition, a key modeling task is to estimate the graph structure of the underlying MRF from data. In this talk, I'll give a high-level picture of these queries, and some of the methods we have developed to answer these queries.
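For reference, a pairwise discrete MRF and its partition function, the normalization constant referred to above:

<code latex>
% A pairwise discrete MRF over graph G = (V, E) with parameters \theta
p(x;\theta) \;=\; \frac{1}{Z(\theta)}
  \exp\Big( \sum_{s \in V} \theta_s(x_s)
          + \sum_{(s,t) \in E} \theta_{st}(x_s, x_t) \Big),
\qquad
Z(\theta) \;=\; \sum_{x} \exp\Big( \sum_{s \in V} \theta_s(x_s)
          + \sum_{(s,t) \in E} \theta_{st}(x_s, x_t) \Big)
</code>

The sum defining Z ranges over all joint configurations x, which is why estimating it (and the event probabilities and MAP configurations built on it) is the computational bottleneck that approximate inference methods target.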

Joint work with John Lafferty and Martin Wainwright.

2007-02-14: CBLL Seminar: Risi Kondor (Columbia)

Wednesday 02/14 at 11:30 AM. 715/719 Broadway, Room 1221 (12th floor). TITLE: A complete set of rotationally and translationally invariant features based on a generalization of the bispectrum to non-commutative groups

Risi Kondor, Columbia University

Deriving translation and rotation invariant representations is a fundamental problem in computer vision with a substantial literature. I propose a new set of features which

(a) are simultaneously invariant to translation and rotation; (b) are sufficient to reconstruct the original image with no loss (up to a bandwidth limit); and (c) do not involve matching with a template image or any similar discontinuous operation.

The new features are based on Kakarala's generalization of the bispectrum to compact Lie groups and a projection onto the sphere. I validated the method on a handwritten digit recognition dataset with randomly translated and rotated digits.

2007-01-25: CBLL Seminar: Francis Bach (ENSMP)

Thursday January 25th, 11:30AM Room 1221 (12th floor), NYU, 715/719 Broadway,

TITLE: Image Classification with Segmentation Graph Kernels

Francis Bach, Ecole Nationale Superieure des Mines de Paris

The output of image segmentation is often represented by a labelled graph, each vertex corresponding to a segmented region, with edges joining neighboring regions. However, such rich representations of images have mostly remained underused for learning tasks, partly due to the observed instability of the segmentation process and the inherent difficulty of inexact graph matching or other graph mining problems with uncertain graphs. Recent advances in kernel-based methods have made it possible to handle structured objects such as graphs by defining similarity measures via kernels that can be used for many learning tasks, such as classification with a support vector machine. In this work, we propose a family of kernels between two segmentation graphs, each obtained by watershed transforms from the original images. Our kernels are based on soft matchings of subtree patterns of the respective graphs, leveraging the natural structure of images while remaining robust to the uncertainty of the segmentation process. Our family of kernels yields competitive performance on common image classification benchmarks. Moreover, by using kernels to compute similarity measures between images, we are able to take advantage of recent advances in kernel-based learning methods: semi-supervised learning allows us to reduce the required number of labelled images, while multiple kernel learning algorithms efficiently select the most relevant kernels within the family for a particular learning task.

Joint work with Zaid Harchaoui.

2006-12-20: CBLL Seminar: Pierre Baldi (UCI)

Wednesday December 20, at 11:00 in room 1221, 715/719 Broadway, New York. TITLE: Charting Chemical Space with Computers: Challenges and Opportunities for AI and Machine Learning

SPEAKER: Pierre Baldi, UC Irvine.

ABSTRACT: Small molecules with at most a few dozen atoms play a fundamental role in organic chemistry and biology. They can be used as combinatorial building blocks for chemical synthesis, as molecular probes for perturbing and analyzing biological systems, and for the screening/design/discovery of new drugs. As datasets of small molecules become increasingly available, it becomes important to develop computational methods for the classification and analysis of small molecules and in particular for the prediction of their physical, chemical, and biological properties.

We will describe datasets and machine learning methods, in particular kernel methods, for chemical molecules represented by 1D strings, 2D graphs of bonds, and 3D structures. We will demonstrate state-of-the-art results for the prediction of physical, chemical, or biological properties including the prediction of toxicity and anti-cancer activity and the applications of these methods to the discovery of new drug leads. More broadly, we will discuss some of the challenges and opportunities for computer science, AI, and machine learning in chemistry.

2006-06-23: Wolf Kienzle (Max Planck Institute)

June 23rd, 2:30PM, 715 Broadway, 12th floor conference room. TITLE: Learning an interest point detector from human eye movements

W. Kienzle, F.A. Wichmann, B. Schoelkopf, and M.O. Franz

The talk is about learning an interest point detector (saliency map) from human eye movement statistics. Instead of modelling biologically plausible image features (edge, blob, corner filters, etc.), we simply train a classifier on pixel values of fixated vs. randomly selected image patches. Thus, the learned function provides a measure of interestingness, but without being biased towards plausible but possibly misleading biological assumptions. We describe the data collection, training, and evaluation process, and show that our learned saliency measure significantly accounts for human eye movements. Furthermore, we illustrate connections to existing interest operators, and present a multi-scale interest point detector based on the learned function.
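A hedged sketch of the data setup described above, using an off-the-shelf SVM; the patch size, sampling counts, and classifier choice are our assumptions, not the authors'.

<code python>
import numpy as np
from sklearn.svm import SVC

def train_saliency(images, fixations, patch=13, n_neg=1000):
    """Positives: raw pixel patches centered on human fixation points.
    Negatives: patches at uniformly random image locations."""
    rng = np.random.default_rng(0)
    X, y, h = [], [], patch // 2
    for img, fixs in zip(images, fixations):   # img: (H, W) grayscale
        H, W = img.shape
        for (r, c) in fixs:                    # fixated patches -> +1
            if h <= r < H - h and h <= c < W - h:
                X.append(img[r-h:r+h+1, c-h:c+h+1].ravel()); y.append(1)
        for _ in range(max(1, n_neg // len(images))):  # random -> -1
            r = rng.integers(h, H - h); c = rng.integers(h, W - h)
            X.append(img[r-h:r+h+1, c-h:c+h+1].ravel()); y.append(-1)
    clf = SVC(kernel='rbf').fit(np.array(X), np.array(y))
    return clf  # clf.decision_function over patches gives a saliency map
</code>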

2006-04-20: Brendan Frey (Toronto)

Time: Thursday, April 20, 2006 at 11:00AM, Place: 719 Broadway, Room 1221

TITLE: Affinity propagation for combined bottom-up and top-down clustering

Brendan J. Frey, University of Toronto

Clustering is a critical task in the analysis of scientific data and in natural or artificial sensory processing. Existing techniques either are bottom-up and make pair-wise decisions when linking together training cases, or are top-down and represent each cluster using a parametric model, while alternately assigning training cases to clusters and updating parameters. I'll describe an algorithm that we call “affinity propagation”, which for the first time combines complementary advantages of these distinct approaches. Affinity propagation can use sophisticated cluster models, but operates by propagating real-valued messages between pairs of training cases. Because affinity propagation replaces the estimation of model parameters with a step that considers many potential models and many possible cluster assignments, it can find better solutions than strictly bottom-up or top-down methods.
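The message-passing updates behind affinity propagation are compact enough to sketch. The responsibility/availability equations below follow Frey and Dueck's published formulation, with damping added as a standard implementation detail:

<code python>
import numpy as np

def affinity_propagation(S, n_iter=200, damping=0.9):
    """Minimal sketch of affinity propagation.

    S: (n, n) similarity matrix; the diagonal entries S[k, k] are the
    "preferences" that control how many exemplars emerge.  Returns the
    exemplar index chosen for each point."""
    n = len(S)
    R = np.zeros((n, n))   # responsibilities: evidence i sends to k
    A = np.zeros((n, n))   # availabilities:   evidence k sends to i
    for _ in range(n_iter):
        # r(i,k) = s(i,k) - max_{k' != k} [ a(i,k') + s(i,k') ]
        M = A + S
        idx = M.argmax(axis=1)
        first = M[np.arange(n), idx]
        M[np.arange(n), idx] = -np.inf
        second = M.max(axis=1)
        Rnew = S - first[:, None]
        Rnew[np.arange(n), idx] = S[np.arange(n), idx] - second
        R = damping * R + (1 - damping) * Rnew
        # a(i,k) = min(0, r(k,k) + sum_{i' not in {i,k}} max(0, r(i',k)))
        Rp = np.maximum(R, 0)
        np.fill_diagonal(Rp, R.diagonal())
        Anew = Rp.sum(axis=0, keepdims=True) - Rp
        dA = Anew.diagonal().copy()        # a(k,k) = sum max(0, r(i',k))
        Anew = np.minimum(Anew, 0)
        np.fill_diagonal(Anew, dA)
        A = damping * A + (1 - damping) * Anew
    return (A + R).argmax(axis=1)          # each point's chosen exemplar
</code>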

Work done in collaboration with Delbert Dueck, University of Toronto.

2006-03-15: Boris Epshtein (Weizmann Institute)

Time: Wednesday, March 15, 2006 at 3:00PM, 719 Broadway, Room 1221

TITLE: Visual classification by a hierarchy of semantic fragments

Boris Epshtein, Weizmann Institute

We describe visual classification by a hierarchy of semantic fragments. In fragment-based classification, objects within a class are represented by common sub-structures selected during training. Here we propose two extensions to the basic fragment-based scheme. The first extension is the extraction and use of feature hierarchies. We describe a method that automatically constructs complete feature hierarchies from image examples, and show that features constructed hierarchically are significantly more informative and better for classification compared with similar non-hierarchical features. The second extension is the use of so-called semantic fragments to represent object parts. The goal of a semantic fragment is to represent the different possible appearances of a given object part. The visual appearance of such object parts can differ substantially, and therefore traditional image similarity-based methods are inappropriate for the task. We show how the method can automatically learn the part structure of a new domain, identify the main parts, and how their appearance changes across objects in the class. We discuss the implications of these extensions to object classification and recognition.

Joint work with Prof. Shimon Ullman.

2005-10-20: Sebastian Seung (MIT)

2005-10-20: Room 1221, 719 Broadway, Thursday Oct 20, 12:00PM. Sebastian Seung, Dept. of Brain and Cognitive Sciences, MIT

TITLE: Representing part-whole relationships in recurrent networks

There is much debate about the computational function of top-down synaptic connections in the visual system. Here we explore the hypothesis that top-down connections, like bottom-up connections, reflect part-whole relationships. We analyze a recurrent network with bidirectional synaptic interactions between a layer of neurons representing parts and a layer of neurons representing wholes. Within each layer, there is lateral inhibition. When the network detects a whole, it can rigorously enforce part-whole relationships by ignoring parts that do not belong. The network can complete the whole by filling in missing parts. The network can refuse to recognize a whole, if the activated parts do not conform to a stored part-whole relationship. Parameter regimes in which these behaviors happen are identified using the theory of permitted and forbidden sets. The network behaviors are illustrated by recreating Rumelhart and McClelland's “interactive activation” model. (joint work with Viren Jain and Valentin Zhigulin)

2005-05-02: Jean Ponce (UIUC)

2005-05-02: Room 1221, 719 Broadway, Monday May 2, 2:00PM. Jean Ponce, Beckman Institute, UIUC

TITLE: 3D Photography

This talk addresses the problem of automatically acquiring three-dimensional object and scene models from multiple pictures, a process known as 3D photography. I will introduce a relative of Chasles' absolute conic, the absolute quadratic complex, and discuss its applications to the calibration of cameras with rectangular or square pixels without the use of calibration charts. I will also present a novel algorithm that uses the geometric and photometric constraints associated with multiple calibrated photographs to construct high-quality solid models of complex 3D objects in the form of carved visual hulls. If time permits, I will also briefly discuss our most recent results on category-level object recognition.

Joint work with Yasutaka Furukawa, Svetlana Lazebnik, Kenton McHenry, Theo Papadopoulo, Cordelia Schmid, Monique Teillaud and Bill Triggs.

2005-02-10: CBLL Seminar: Larry Carin (Duke)

2005-02-10: Room 1221, 719 Broadway, Thursday Feb 10, 11:00AM. Larry Carin, Duke University

TITLE: Application of Active Learning and Semi-Supervised Techniques in Adaptive Sensing

In sensing problems one typically has a small quantity of labeled data and a large quantity of unlabeled data that we must characterize. In addition, when sensing we often have access to much of the unlabeled data simultaneously. This affords the opportunity to employ semi-supervised classification algorithms, designed using all available information, i.e., all labeled and unlabeled data. In addition, to augment the small quantity of labeled data, with the goal of reducing classification risk, one may employ active learning. In this context active learning may be manifested by acquiring labels on a small subset of the unlabeled data, with the examples chosen for labeling based on information-theoretic metrics. Moreover, active learning may also be employed in a multi-sensor setting, in which rather than acquiring labels we acquire new multi-sensor data, with properly tailored sensors and sensor waveforms. In this talk the basic ideas of active and semi-supervised learning are discussed in the context of sensing. We also discuss the utility of new machine learning technology for the sensing problem, such as variational Bayes inference. The ideas are demonstrated using several examples of measured multi-sensor data.

2005-02-09: CBLL Seminar: John Langford (TTI-C)

2005-02-09: Room 1221, 719 Broadway, Wednesday, Feb. 9th, 3:30pm. John Langford, Toyota Technological Institute, Chicago

TITLE: Cost Sensitive Classification with Binary Classification

Cost sensitive classification is the problem of making a choice from an arbitrary set so as to minimize the cost of the choice. Binary classification is the problem of making a single correct binary prediction.

Cost sensitive classification can be reduced to binary classification in such a way that a small regret (= error rate above the minimum error rate) on the created binary classification problems implies a small regret on the cost sensitive classification problems.

This implies that a binary classifier can hope to solve (essentially) any learning problem with any bounded loss function. It also implies that any consistent binary classifier can be made into a consistent multiclass classifier.

John Langford will explain how this reduction works.
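As a rough illustration (one simple reduction in this spirit, not necessarily the one presented in the talk), each cost-sensitive example can be expanded into importance-weighted binary comparisons between pairs of choices:

<code python>
import itertools

def cost_sensitive_to_binary(examples):
    """Turn each cost-sensitive example (x, costs) into weighted binary
    examples comparing every pair of choices; the weight reflects how
    much getting that comparison right matters."""
    binary = []
    for x, costs in examples:
        for a, b in itertools.combinations(range(len(costs)), 2):
            if costs[a] == costs[b]:
                continue                           # comparison is free
            label = int(costs[a] < costs[b])       # 1 if a is the better choice
            weight = abs(costs[a] - costs[b])      # regret of getting it wrong
            binary.append(((x, a, b), label, weight))
    return binary
</code>

At test time the multiclass choice can then be made by a tournament over the learned pairwise predictions, which is how a small binary regret translates into a small cost-sensitive regret.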

2005-02-04: CNS Seminar: Alex Pouget (U. Rochester)

2005-02-04: CNS building, Rm 815, 1:00 PM (special presentation at the usual Vision Journal Club time) Alex Pouget, Department of Brain and Cognitive Sciences, University of Rochester

Recent psychophysical experiments indicate that humans use approximate Bayesian inference in a wide variety of tasks, ranging from cue integration to decision making to motor control. This implies that neurons both represent probability distributions and combine those distributions according to a close approximation to Bayes' rule. We will demonstrate how such Bayesian inference can be implemented in the dynamics of recurrent analog circuits, using cue integration as an example. We will also present recent recordings showing that the receptive fields of multisensory neurons in area VIP are consistent with the predictions of our model. We will end by discussing our recent attempt to generalize this approach to networks of spiking neurons.

2004-03-24: Guest Lecture: Lawrence Saul (U. Penn)

2004-03-24: WWH 101, 5:00PM: Guest Lecture by Lawrence Saul, University of Pennsylvania. TITLE: Unsupervised Learning, Dimensionality Reduction, and Non-linear Embedding.

More information about L. Saul and his work is available at http://www.cis.upenn.edu/~lsaul

2004-03-12: Brendan Frey (Toronto)

2004-03-12: WWH 1314, 3:00PM: Brendan J. Frey, University of Toronto. TITLE: Learning the “Epitome” of an Image

I will describe a new model of image data that we call the “epitome”. The epitome of an image is its miniature, condensed version containing the essence of the textural and shape properties of the image. As opposed to previously used simple image models, such as templates or basis functions, the size of the epitome is considerably smaller than the size of the image or object it represents, but the epitome still contains most constitutive elements needed to reconstruct the image. A collection of images often shares an epitome, e.g., when images are a few consecutive frames from a video sequence, or when they are photographs of similar objects. A particular image in a collection is defined by its epitome and a smooth mapping from the epitome to the image pixels. When the epitome model is used within a hierarchical generative model, appropriate inference algorithms can be derived to extract epitomes from a single image or a collection of images and at the same time perform various inference tasks, such as image segmentation, motion estimation, object removal, super-resolution and image denoising.

Go to http://research.microsoft.com/~jojic/epitome.htm for a sneak preview.

Joint work with Nebojsa Jojic and Anitha Kannan.

2004-03-04: Seminar: Jean Ponce (UIUC)

2004-03-04: 575 Broadway, Room 1221, 4:00PM: Jean Ponce, Beckman Institute and Department of Computer Science, UIUC. TITLE: Toward True 3D Object Recognition

This talk addresses the problem of recognizing three-dimensional (3D) objects in photographs and image sequences, revisiting viewpoint invariants as a local representation of shape and appearance. The key insight is that, although smooth surfaces are almost never planar in the large, and thus do not (in general) admit global invariants, they are always planar in the small—that is, sufficiently small surface patches can always be thought of as being comprised of coplanar points—and thus can be represented locally by planar invariants. This is the basis for a new, unified approach to object recognition where object models consist of a collection of small (planar) patches, their invariants, and a description of their 3D spatial relationship. I will illustrate this approach with two fundamental instances of the 3D object recognition problem: (1) modeling rigid 3D objects from a small set of unregistered pictures and recognizing them in cluttered photographs taken from unconstrained viewpoints; and (2) representing, learning, and recognizing non-uniform texture patterns under non-rigid transformations. I will also discuss extensions to the analysis of video sequences and the recognition of object categories. If time permits, I will conclude with a brief presentation of our recent work on 3D photography.

Joint work with Svetlana Lazebnik, Frederick Rothganger, and Cordelia Schmid.
