Multi-Res Modeling Group
Seminars


We regularly invite local and visiting speakers to speak as part of our computer graphics seminar series. Past visitors include Pat Hanrahan, Marshall Bern, Jason Mitchell, Chas Boyd, Bill Mark, Jonathan Shewchuk, Matthew Papakipos, Steven Gortler, Eric Veach, Wook, Ken Musgrave, John Hughes, Leif Kobbelt, and Bernd Frölich, among others. The most recent visitors are listed below.

Pat Hanrahan talk announcement

Pat Hanrahan, Department of Computer Science, Stanford University, Monday, November 25th 2002, 12:30pm - 2pm, 123 Lauritsen; Streaming Scientific Computing on GPUs; Graphics processors are very fast and powerful. A modern GPU has 8 parallel programmable pipelines running 4-way vector floating point instructions and achieves over 50 GFLOPs. Graphics processors use VLSI resources efficiently because they exploit data parallelism and carefully orchestrate communication between processors. For these reasons, over the last two decades, graphics processor performance has been increasing significantly faster than that of microprocessors, at an annual rate of over 2.4x.

A natural question to ask is whether other important algorithms can
take advantage of the techniques used to achieve high performance in
graphics processors. The Stanford Streaming Supercomputer project,
co-led with Bill Dally, is investigating this approach. In this talk
I will describe our approach to the problem. First, we abstract the
processor as a stream processor. Second, we have developed a high-level
programming environment for stream processing called Brook. Brook will
run on a variety of architectures, including next-generation GPUs. Finally, we
have evaluated three applications: molecular dynamics, Galerkin finite
element codes on triangular meshes, and a multigrid-based fluid flow
solver. These algorithms all seem to map well to stream processors. Our
conclusion is that it is possible to build much more cost-effective
supercomputers for scientific calculations.
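The stream-processing abstraction the talk describes can be loosely illustrated in a few lines of Python. This is a generic sketch, not Brook itself: computation is expressed as side-effect-free kernels mapped elementwise over streams, which is what makes it easy to parallelize; `saxpy` is a hypothetical example kernel.

```python
# Sketch of the stream programming model (not Brook itself): a kernel is a
# pure function applied independently to each element of its input streams.

def map_kernel(kernel, *streams):
    """Apply a kernel elementwise across input streams (data parallelism)."""
    return [kernel(*elems) for elems in zip(*streams)]

def saxpy(x, y, a=2.0):
    """Example kernel: out = a * x + y."""
    return a * x + y

xs = [1.0, 2.0, 3.0]
ys = [10.0, 20.0, 30.0]
result = map_kernel(saxpy, xs, ys)  # each element is independent
```

Because each kernel invocation touches only its own elements, a stream processor can run them across many pipelines with no synchronization.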


Marshall Bern talk announcement

Marshall Bern, Palo Alto Research Center (PARC), Wednesday, November 13th 2002, 4 - 5pm, 070 Moore; Two Computer Vision Problems in Structural Biology; I will talk about two computer vision / image processing problems,
both arising in efforts to determine the structures of biological molecules.
The first problem is automatic classification of drop images from a high-throughput protein crystallization system. A robot prepares 100,000 crystallization experiments a day and takes photographs of the experiments in progress. Someone, or something, then has to decide whether a crystal (or a precipitate, or nothing) has formed inside each drop of solution. The second problem is to pick out the locations of individual molecules in very noisy, low-contrast electron microscope (cryo-EM) micrographs.
Using something like computed tomography, the picked images can be combined to give a 3D reconstruction of the molecule.
Resolution depends upon the number of molecule images:
10 Angstrom resolution has been achieved with 50,000 hand-picked images, but atomic resolution (3 Angstrom) will require millions of images.


Jason Mitchell talk announcement

Jason L. Mitchell, ATI Research, Monday, October 28th 2002, 12:30pm - 2pm, 123 Lauritsen; Hacking Next-Generation Programmable Graphics Hardware; This lecture will describe the latest generation of programmable
commodity graphics processing units (GPUs) and the applications they
enable. This includes use of programmable graphics hardware to perform
both rendering and non-rendering computations. On the rendering side, we
will illustrate a variety of techniques such as GPU-side animation,
high-dynamic range rendering and non-photorealistic rendering. On the
non-rendering side, we will demonstrate use of a programmable GPU to run a
3D fire simulation on a lattice for subsequent rendering on a burning
object using fur-like rendering techniques.

We will also demonstrate the ability to perform sophisticated image processing, such as real-time transformation to and from the frequency domain with a Fourier transform executed on the GPU. Finally, the importance of high-level shading languages will be underscored with several examples and an analysis of Microsoft's industry-standard DirectX 9 High Level Shading Language compiler.
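The frequency-domain round trip mentioned above can be sketched on the CPU; this is an illustrative naive 1D DFT in Python, not the talk's GPU implementation (which would use a fast, parallel FFT over 2D image data).

```python
import cmath

# Naive 1D discrete Fourier transform and its inverse (O(n^2)); a GPU FFT
# performs the same transformation massively in parallel on image data.

def dft(signal):
    n = len(signal)
    return [sum(signal[k] * cmath.exp(-2j * cmath.pi * j * k / n)
                for k in range(n)) for j in range(n)]

def idft(spectrum):
    n = len(spectrum)
    return [sum(spectrum[j] * cmath.exp(2j * cmath.pi * j * k / n)
                for j in range(n)) / n for k in range(n)]

signal = [0.0, 1.0, 2.0, 1.0]
roundtrip = [c.real for c in idft(dft(signal))]  # recovers the input
```

Filtering in the frequency domain amounts to scaling spectrum coefficients between the forward and inverse transforms.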


Chas Boyd talk announcement

Chas Boyd, PM Direct3D (r) Microsoft Corporation, Wednesday, October 23rd 2002, 12:30pm - 2pm, 123 Lauritsen, Hands on Lab with lead developers of Direct3D, 4-6pm, Jorgensen Intel Lab 154; Plug into the Power of X!; Fully programmable graphics cards with full floating point support enable a
whole new level of realism in real time graphics and use of the graphics
processing unit for physical simulation and many other "non-graphics" tasks.
DirectX is a standardized interface that provides access to these new
features through a high level interface that allows the same code to run on
implementations from multiple GPU vendors. It supports GPU programming
using both assembly-level and high-level programming models. In the lecture
we will cover some of the background of DirectX such as the overall
architecture, key concepts, and the High Level Shading Language (HLSL)
interface to programming vertex and pixel programs. Data access methods for
multipass algorithms as commonly used in general GPU programming will be
covered. Sample simulation applications include the wave equation and
computation of potential surfaces. The lecture will be augmented with a hands-on lab in the afternoon providing an introduction to high-level development tools for accessing the programmable graphics hardware. In this lab students will be able to develop, debug, and step through GPU code for sample applications using the VC7 IDE.
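The wave-equation simulation mentioned above can be sketched with an explicit finite-difference scheme; this is a generic CPU illustration, not the DirectX sample itself. On a GPU each time step would be one multipass render over a texture holding the displacement field.

```python
# One leapfrog time step of the 1D wave equation u_tt = c^2 u_xx with
# fixed (zero) boundaries; stable when the CFL number c*dt/dx <= 1.

def wave_step(u_prev, u, c=1.0, dt=0.1, dx=0.2):
    r2 = (c * dt / dx) ** 2      # squared CFL number
    u_next = u[:]                # endpoints kept fixed (Dirichlet boundary)
    for i in range(1, len(u) - 1):
        u_next[i] = 2 * u[i] - u_prev[i] + r2 * (u[i+1] - 2 * u[i] + u[i-1])
    return u_next

u_prev = [0.0] * 10
u = [0.0] * 10
u[5] = 1.0                       # initial displacement bump
for _ in range(20):
    u_prev, u = u, wave_step(u_prev, u)
```

In the multipass GPU formulation, `u_prev` and `u` live in two textures and the update above is a pixel program; the wave spreading outward from the bump is then rendered directly.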


Bill Mark talk announcement

Bill Mark, of nVidia Corp. and UT Austin, Monday, October 21st 2002, 12:30pm - 2pm, 123 Lauritsen; Programmable Graphics Hardware: Beyond Real-Time Movie Rendering; The latest generation of 3D PC graphics hardware (GPUs) includes highly-programmable floating-point vertex and pixel-fragment processors. These processors are flexible enough to support high-level
C-like programming languages.
GPU designers have added programmability to these GPUs mostly to support
procedural shading capabilities similar to those used in off-line movie
rendering. But, much of the impact of these GPUs may come from the fact
that they are the first highly parallel processors that are deployed on
every desktop and are user programmable. The stream-processing
programming model used by these GPUs can be used to efficiently support
a wide variety of algorithms, including ray tracing and various types of
physical simulation.
The speaker led the design effort at NVIDIA for Cg, a C-like language for GPU programming. This talk will describe the design goals of Cg,
explain some of the key design decisions in the language, and summarize
Cg’s programming model and capabilities.


Jonathan Shewchuk talk announcement

Jonathan Shewchuk, Department of Electrical Engineering & Computer Sciences, University of California at Berkeley, Wednesday, October 9th, 4-5pm, Moore 070; Constrained Delaunay Tetrahedralizations and Provably Good Mesh Generation; Unstructured meshes of Delaunay or Delaunay-like tetrahedra have many advantages in such applications as rendering and visualization, interpolation, and numerical methods for simulating physical
phenomena such as mechanical deformation, heat transfer, and fluid flow.

One of the most difficult parts of tetrahedral mesh generation is
forcing the mesh to conform to complicated domain boundaries having
small angles. Although simple tetrahedralization algorithms are
known, they tend to produce poorly-shaped tetrahedra. Recovering
domain boundaries while maintaining the Delaunay property is
difficult.

My solution to this problem combines a theory about the existence of
constrained Delaunay tetrahedralizations with an algorithm that
carefully chooses new vertices to guarantee existence. The boundary
recovery algorithm is "provably good" in the sense that the edges
of the boundary-conforming mesh are not much shorter than necessary
for a good-quality mesh.

The boundary recovery step is the first step for a mesh generation
algorithm that refines the constrained Delaunay tetrahedralization by
inserting additional vertices. These vertices are placed so as to
improve the quality of the mesh for interpolation and finite element
methods. The refinement algorithm maintains guaranteed bounds on
edge lengths, and also offers some guarantees on tetrahedron shapes.
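The element-quality guarantees described above are commonly stated in terms of the circumradius-to-shortest-edge ratio: refinement inserts new vertices at circumcenters of elements whose ratio exceeds a bound. As a simplified illustration (the talk concerns tetrahedra; this 2D triangle version is analogous):

```python
import math

def radius_edge_ratio(a, b, c):
    """Circumradius divided by shortest edge length of triangle abc."""
    la, lb, lc = math.dist(b, c), math.dist(a, c), math.dist(a, b)
    s = (la + lb + lc) / 2
    area = math.sqrt(max(s * (s - la) * (s - lb) * (s - lc), 0.0))  # Heron
    circumradius = la * lb * lc / (4 * area)
    return circumradius / min(la, lb, lc)

# An equilateral triangle attains the best possible ratio, 1/sqrt(3) ~ 0.577;
# skinny "sliver-like" triangles score much worse.
good = radius_edge_ratio((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))
bad = radius_edge_ratio((0, 0), (1, 0), (0.5, 0.01))
```

A refinement loop would repeatedly split any element whose ratio exceeds the chosen bound, and the "provably good" guarantee is that this terminates with well-shaped elements whose edges are not much shorter than necessary.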


Matthew Papakipos talk announcement

Matthew Papakipos, Director of Architecture, nVidia Corp., Monday, October 7th 2002, 1pm, 123 Lauritsen; How long can Graphics Chips exceed Moore's Law?; A few short years ago, single-chip PC 3D graphics solutions arrived on the market at performance levels that rivaled professional workstations with multi-chip graphics pipelines. Since then, graphics performance has grown at a rate approaching a doubling every 6 months, far exceeding Moore's Law.

How is this possible? Will it be sustainable? There is evidence that
this geometric performance growth is not only possible, but inevitable.
The reason lies in the way that Graphics Architectures have evolved, and the
fact that this evolution has taken a very different path than CPUs. As
GPUs become more flexible, powerful, and programmable, their architecture is well-suited to embrace the parallelism that is inherent in graphics,
shading, and other hard computational problems.


Alex Keller course announcement Alex Keller, Numerical Algorithms Group, Dept. of Computer Science, University of Kaiserslautern, July 30th through August 3rd, 9:30 a.m. - 12:00 noon, Powell-Booth 100; Beyond Monte Carlo; Monte Carlo methods are based on probability theory and are realized by simulations of random numbers. Quasi-Monte Carlo algorithms are based on number theory and are realized by deterministic low discrepancy points. Using low discrepancy sampling in the right way yields much faster rendering algorithms.

This course presents new and strikingly simple algorithms for the efficient generation of deterministic and randomized low discrepancy point sets, introduces the principles of quasi-Monte Carlo integration and Monte Carlo extensions of quasi-Monte Carlo algorithms, and finally provides practical insight by example hard- and software rendering algorithms that benefit from low discrepancy sampling.

This course is accessible to an audience with a principal understanding of basic ideas of Monte Carlo soft- and hardware rendering algorithms. The tutorial will teach the following topics: The concept of low discrepancy sampling, quasi-Monte Carlo integration techniques, Monte Carlo extensions of quasi-Monte Carlo integration, beneficial applications of deterministic and randomized low discrepancy sampling to hard- and software rendering algorithms. Participants in this course will learn how to construct simpler and much faster rendering algorithms by using number theoretic sampling methods. The course provides the simple algorithms, insight into the theory underneath, and gives application examples for hard- and software rendering.

The course schedule is as follows

  • Lecture 1 : Random Sampling and Monte Carlo Integration
  • Lecture 2 : Low Discrepancy Sampling
  • Lecture 3 : Deterministic Sampling and Quasi-Monte Carlo Integration
  • Lecture 4 : Monte Carlo Extensions of Quasi-Monte Carlo
  • Lecture 5 : Applications (to Computer Graphics)
If you cannot access the course material, feel free to request it by email.
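The deterministic low discrepancy points the course covers can be illustrated with the classic radical-inverse construction; this is a generic sketch of the van der Corput sequence and Halton points built from it, not code from the course material.

```python
# Van der Corput radical inverse: mirror the base-b digits of i about the
# radix point. Halton points pair radical inverses in distinct prime bases.

def radical_inverse(i, base):
    inv, f = 0.0, 1.0 / base
    while i > 0:
        inv += (i % base) * f
        i //= base
        f /= base
    return inv

def halton(i, bases=(2, 3)):
    return tuple(radical_inverse(i, b) for b in bases)

# The first few points already cover the unit square far more evenly than
# the same number of pseudo-random samples would.
pts = [halton(i) for i in range(1, 9)]
```

Replacing random samples by such points in an integration (or rendering) loop is the essence of quasi-Monte Carlo: the estimator is unchanged, only the sample generator differs.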

Igor Guskov talk announcement Igor Guskov, Program in Applied Mathematics, Princeton University, Thursday, February 18, 4:30pm, Beckman Institute Auditorium; Irregular Subdivision and Signal Processing for Arbitrary Surface Triangulations; Recent progress in 3D acquisition techniques and mesh simplification methods has made triangulated mesh hierarchies of arbitrary topology a basic geometric modeling primitive. These meshes typically have no regular structure, so classical processing methods such as Fourier and wavelet transforms do not immediately apply.

In this talk I will report on some very recent work which is aimed at building signal processing type algorithms for unstructured surface triangulations. In particular I will introduce a new non-uniform relaxation technique which lets us build a Burt-Adelson type detail pyramid on top of a mesh simplification hierarchy (Progressive Meshes of Hoppe). The resulting multiresolution hierarchy makes it easy to perform a full range of standard signal processing tasks such as smoothing, enhancement, filtering and editing of arbitrary surface triangulations. I will explain the basic components of our approach, the motivation behind it, and show some examples demonstrating the power of our method.

This is joint work with Wim Sweldens and Peter Schröder.
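The relaxation in the work above is non-uniform, with weights adapted to the irregular mesh; as a much-simplified uniform stand-in, here is one pass of "umbrella" Laplacian smoothing, which moves each vertex toward the average of its neighbors. The toy mesh is hypothetical.

```python
# One pass of uniform Laplacian ("umbrella") smoothing on a triangle mesh.
# positions: list of (x, y, z) tuples; neighbors: list of index lists.

def smooth(positions, neighbors, lam=0.5):
    out = []
    for i, p in enumerate(positions):
        nbrs = neighbors[i]
        if not nbrs:
            out.append(p)        # leave isolated/boundary vertices fixed
            continue
        avg = [sum(positions[j][k] for j in nbrs) / len(nbrs) for k in range(3)]
        out.append(tuple(p[k] + lam * (avg[k] - p[k]) for k in range(3)))
    return out

# Toy example: vertex 0 is a spike above three flat neighbors.
pos = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (-1.0, 1.0, 0.0), (-1.0, -1.0, 0.0)]
nbr = [[1, 2, 3], [], [], []]
smoothed = smooth(pos, nbr)      # the spike's height drops by lam
```

A detail pyramid in the Burt-Adelson spirit stores, per level, the difference between the mesh and its relaxed (smoothed) prediction, which is what enables filtering and editing at multiple scales.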


Vibeke Sorensen talk announcement Vibeke Sorensen, USC School of Cinema and Television, Wednesday, May 20, 4pm, Jorgensen 74; Recent Explorations in Computer Art and Animation; Vibeke Sorensen will be showing and discussing her work in computer art and animation, focusing on her recent interactive and collaborative work. This includes stereoscopic animation (work-in-progress and Maya, 1993) and software (DrawStereo, 1993/98), as well as interactive web-based work, including "MindShipMind." The latter is a collaboration with Austrian composer Karlheinz Essl, based on the writings of 30 artists and scientists at a 3-week seminar called "Order, Complexity, and Beauty," held at the MindShip in Copenhagen, Denmark, in 1996. She will also discuss her "Global Visual Music Jam Session," a collaboration with UC San Diego Music Department professors Miller Puckette (mathematician and computer scientist) and Rand Steiger (composer). They are developing a new multi-media programming language which allows users to combine 2D and 3D computer graphics and animation, digital video, and computer sound and music for real-time, improvised multi-media performance. Finally, she will review her "Display Technology for Computer Art," in which she is working with USC Chemistry professor and Caltech Chemistry Department alumnus Dr. Mark Thompson on the development of new light-emitting displays for still and moving images.

Kari Pulli talk announcement Kari Pulli, Stanford University, Tuesday, May 19, 10.30am-12noon, Moore 80; Scanning and Displaying Colored 3D Objects; In this talk I will describe two projects related to scanning and displaying 3D objects. The first project covers my thesis work completed at the University of Washington. In this project, we used stereo with structured light to capture the geometry and color of 3D objects. Several views of the object were then registered into a single coordinate system and an initial surface estimate was created using space carving. This initial estimate was then refined using mesh optimization techniques. Finally, the color and geometry information was combined using view-dependent texturing. The second project I will discuss is the Digital Michelangelo project at Stanford University. I will discuss how we plan to scan several Michelangelo sculptures, where we are now, and what kinds of problems we foresee.

Steve Seitz talk announcement Steven Seitz, Microsoft Research, Tuesday, May 12, 10.30am, Moore 80; Viewing and Manipulating 3D Scenes Through Photographs; The problem of acquiring and manipulating photorealistic visual models of real scenes is a fast-growing new research area that has spawned successful commercial products like Apple's QuickTime VR. An ideal solution is one that enables (1) photorealism, (2) real-time user-control of viewpoint, and (3) changes in illumination and scene structure. In this talk I will describe recent work that seeks to achieve these goals by processing a set of input images (i.e. photographs) of a scene to effect changes in camera viewpoint and 3D editing operations. Camera viewpoint changes are achieved by manipulating the input images to synthesize new scene views of photographic quality and detail. 3D scene modifications are performed interactively, via user pixel edits to individual images. These edits are automatically propagated to other images in order to preserve physical coherence between different views of the scene. Because all of these operations require accurate correspondence, I will discuss the image correspondence problem in detail and present new results and algorithms that are particularly suited for image based rendering and editing applications.

Leo Guibas talk announcement Leo Guibas, Stanford, Friday, March 27, 3pm, Jorgensen 74; Kinetic Data Structures; Suppose we are simulating a collection of continuously moving bodies, rigid or deformable, whose instantaneous motion follows known laws. As the simulation proceeds, we are interested in maintaining certain quantities of interest (for example, the separation of the closest pair of objects), or detecting certain discrete events (for example, collisions -- which may alter the motion laws of the objects). In this talk we will present a general framework for addressing such problems and tools for designing and analyzing relevant algorithms, which we call kinetic data structures. The resulting techniques satisfy three desirable properties: (1) they exploit the continuity of the motion of the objects to gain efficiency, (2) the number of events processed by the algorithms is close to the minimum necessary in the worst case, and (3) any object may change its "flight plan" at any moment with a low cost update to the simulation data structures.
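A toy example in the spirit of the framework above: maintain the sorted order of points moving on a line, x_i(t) = p_i + v_i t. Each adjacent pair holds a certificate x_left(t) < x_right(t); when one fails, we swap the pair and only the affected certificates change. This sketch is mine, not from the talk; it rescans all certificates per event (an efficient version uses an event queue) and assumes distinct event times.

```python
def failure_time(pl, vl, pr, vr, now):
    """Earliest t > now at which the left point catches the right one."""
    if vl <= vr:
        return None                          # certificate never fails
    t = (pr - pl) / (vl - vr)
    return t if t > now else None

def kinetic_sort(points, t_end):
    """points: list of (p, v). Return point ids sorted by position at t_end."""
    order = sorted(range(len(points)), key=lambda i: points[i][0])
    now = 0.0
    while True:
        # Find the earliest certificate failure among adjacent pairs.
        best = None
        for k in range(len(order) - 1):
            l, r = order[k], order[k + 1]
            t = failure_time(*points[l], *points[r], now)
            if t is not None and t < t_end and (best is None or t < best[0]):
                best = (t, k)
        if best is None:
            return order                     # no more events before t_end
        now, k = best
        order[k], order[k + 1] = order[k + 1], order[k]  # process the event
```

Note the three properties: nothing happens between events (continuity is exploited), only genuine order changes are processed, and changing a point's velocity would only invalidate its two adjacent certificates.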

Herbert Edelsbrunner talk announcement Herbert Edelsbrunner, UIUC, Friday, March 20, 3pm, Baxter Lecture Hall; Complex Geometry for Modeling Biomolecules; The use of geometric models for molecular conformations dates back at least to Lee and Richards, who in the 70s defined the solvent accessible (SA) model as the union of spherical balls representing atoms. Soon after, Richards and Greer introduced the molecular surface (MS) model as a smooth and possibly more realistic variant of the SA model. We will introduce the new molecular skin (SK) model, similar to the MS model, which has an additional symmetry relevant to studying questions of complementarity.

This talk introduces the alpha complex as the dual of the Voronoi decomposition of an SA model. The complex is a combinatorial object that leads to fast and robust algorithms for visualizing and analyzing geometric models of molecules. As an example, we will see that the alpha complex can be used to compute the precise volume and surface area of an SA model without constructing it. The alpha complex also offers a direct method for defining and computing cavities of molecules. Recent biological studies provide evidence for the physical relevance of this cavity definition.


Michael Gleicher talk announcement Michael Gleicher, Autodesk, Wednesday, February 25, 4pm, Jorgensen 74; Editing and Retargetting Animated Motion with Spacetime Constraints; Most motion for computer animation is single purpose: it applies to a particular character performing a particular action. In this talk, I will describe work on making motion more reusable by providing tools that adapt previously created motions to new situations. The approach views the task of finding an adapted motion as a constrained optimization problem: compute the motion that best preserves the desirable properties of the original, subject to meeting the demands of the new situation. The approach is a variation of Spacetime Constraints as it requires a solver to consider the entire motion simultaneously. By careful choice in how we pose the problem, by judicious use of simplifications and approximations, and by careful implementation, the approach can be made practical. I will show how the approach can be used to provide direct-manipulation editing of animated motion and to retarget motions to new characters.

Mathieu Desbrun talk announcement Mathieu Desbrun, iMAGIS, Grenoble, Monday, February 2, 4pm, Jorgensen 74; Animation of Highly Deformable Materials; Mathieu will speak about his recent work on using particles and implicit skin models to animate highly deformable materials in a physically realistic fashion.

Doug Roble talk announcement Doug Roble, Digital Domain, Wednesday, October 15, 4pm, Jorgensen 74; Integrating Special Effects with Live Action Film -- Computer Vision Techniques and Challenges; Doug will speak about some of the lessons learned from applying computer vision techniques such as pose estimation to the problem of integrating live action footage with computer generated imagery. Doug is a software developer at Digital Domain and has worked on many movies including this year's Dante's Peak.

Copyright © 2003 Peter Schröder Last modified: Thurs Dec 19th 10:22:14 PST 2002