Tuesdays at 4:10 in 
For more information, contact

Spring 2014
April 22  Terrance Pendleton (ISU) An analytical and numerical study of a class of nonlinear evolutionary PDEs
April 29  Kurt Bryan (Rose-Hulman Institute of Technology) Making do with less: An introduction to compressed sensing
SIAM Visiting Lecture Series, hosted by Paul Sacks
Abstracts
Terrance Pendleton (ISU) An analytical and numerical study of a class of nonlinear evolutionary PDEs
This talk concerns an analytical and numerical study of a family of evolutionary partial differential equations (PDEs) which supports peakon solutions for special values of a given bifurcation parameter. Here, the bifurcation parameter describes the balance between convection and stretching for small viscosity in the dynamics of one-dimensional (1D) nonlinear waves in fluids. The first portion of this talk provides global existence and uniqueness results for the considered family of evolutionary PDEs by establishing convergence results for the particle method applied to these equations. This particular class of PDEs is a collection of strongly nonlinear equations which yield traveling wave solutions and can be used to model a variety of flows in fluid dynamics. We apply a particle method to the studied evolutionary equations and provide a new self-contained method for proving its convergence. The latter is accomplished by using the concept of space-time bounded variation and the associated compactness properties. From this result, we prove the existence of a unique global weak solution in some special cases and obtain stronger regularity properties of the solution than previously established.
If time permits, we investigate the dynamics of the interaction among a special class of solutions of the one-dimensional Camassa-Holm (CH) equation, a member of this family which supports peakon solutions. The equation yields soliton solutions whose identity is preserved through nonlinear interactions. These solutions are characterized by a discontinuity at the peak in the wave shape and are thus called peakon solutions. We apply a particle method to the CH equation and show that the nonlinear interaction among the peakon solutions resembles an elastic collision, i.e., the total energy and momentum of the system before the peakon interaction is equal to the total energy and momentum of the system after the collision. From this result, we provide several numerical illustrations that support the analytical study, as well as showcase the merits of using a particle method to simulate solutions to the CH equation under a wide class of initial data.
Stephen Kleene (MIT) Logarithmically spiralling helicoids
Logarithmic spirals are curves generated by one-parameter groups of similarity transformations. We construct helicoidal minimal surfaces embedded in tubes whose axes are modeled on these curves, using a direct and dirty approach. The problem is related to that of finding minimal laminations on curves with prescribed singularities and, more generally, to obtaining minimal surfaces in tubes by bending minimal surfaces along an axis of periodicity. We will discuss the basic techniques involved and some of their applications. (Joint work with Christine Breiner.)
Tim Chumley (ISU) Random billiard models
We introduce a class of random dynamical systems, called random billiards, derived from mathematical billiards by making certain dynamical variables random. Our main interest is the interplay between billiard geometry and stochastic properties of the derived random processes. To this end, we present a series of probabilistic limit theorems intended to shed light on this interplay for both a large class of billiards and for some specific examples. Time permitting, we will also discuss how random billiards can serve as concrete models for investigating basic issues in statistical mechanics such as approach to thermal equilibrium.
Kyle Mandli (University of Texas at Austin) Approaches to forecasting storm surge more quickly and accurately
Hurricanes and typhoons can cause significant human and economic costs to coastal communities. The rise of the sea surface in response to wind and pressure forcing from these storms, called storm surge, can have a devastating effect on the coastline. Therefore, the ability to quickly and accurately predict storm surge location and characteristics has been recognized as being critical in these areas.
Computational approaches to this problem must be able to handle its multiscale nature while remaining computationally tractable and physically relevant. This has commonly been accomplished by solving a depth-averaged set of fluid equations and by employing non-uniform and unstructured grids. These approaches, however, have often had shortcomings due to computational expense, the need for involved model tuning, and missing physics.
In this talk, I will outline some of the approaches we have developed to address several of these shortcomings through the use of advanced computational techniques. These include adaptive mesh refinement, higher levels of parallelism including many-core technologies, and more accurate model equations such as the multilayer shallow water equations. Combining these new approaches promises to address some of the problems in current state-of-the-art models while continuing to decrease the computational overhead needed to calculate a forecast.
Kurt Bryan (Rose-Hulman Institute of Technology) Making do with less: An introduction to compressed sensing
Suppose a bag contains 100 marbles, each with mass 10 grams, except for one defective off-mass marble. Given an accurate electronic balance that can accommodate anywhere from one to 100 marbles at a time, how would you find the defective marble with the fewest weighings? You've probably thought about this kind of problem and know the answer. But what if there are two bad marbles, each of unknown mass? Or three or more? An efficient scheme isn't so easy to figure out now, is it? Is there a strategy that's both efficient and generalizable?
The answer is "yes," at least if the number of defective marbles is sufficiently small.
Surprisingly, the procedure involves a strong dose of randomness.
It's a nice example of a new and very active topic called "compressed sensing" (CS) that spans mathematics, signal processing, statistics, and computer science. In this talk I'll explain the central ideas, which require nothing more than simple matrix algebra and elementary probability.
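The marble puzzle above can be played out numerically. The sketch below (not from the talk; NumPy, Gaussian "weighing weights", and the made-up defective positions 17 and 58 are all assumptions for illustration) takes only 30 random linear measurements of 100 unknowns and still identifies the defective pair. For clarity it decodes by brute-force search over all supports of size two; practical compressed-sensing solvers replace this search with l1 minimization.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n, m = 100, 30                       # 100 marbles, only 30 weighings
x_true = np.zeros(n)
x_true[[17, 58]] = [1.3, -0.7]       # hypothetical mass deviations of the two bad marbles

# Each weighing combines the marbles with random weights (an idealization of
# putting random subsets of marbles on the balance). Row i of A is weighing i.
A = rng.normal(size=(m, n))
y = A @ x_true                       # the 30 observed deviation readings

# Decode: the true pair is the only 2-column support that reproduces y exactly.
def misfit(S):
    coef, *_ = np.linalg.lstsq(A[:, S], y, rcond=None)
    return np.linalg.norm(y - A[:, S] @ coef)

best = min(itertools.combinations(range(n), 2), key=misfit)
print(sorted(best))                  # identifies the defective marbles: [17, 58]
```

Note that 30 measurements would be hopeless for recovering a generic 100-dimensional vector; it is the sparsity (only two nonzero entries) plus the randomness of the measurements that makes recovery possible.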
Diana Gonzalez (Iowa Board of Regents) Transition issues in early college mathematics
Diana Gonzalez joined the Board of Regents staff in September 1998. She holds degrees from the University of Wisconsin-Milwaukee (MBA), the University of North Texas (Ph.D. and M.Ed.), and Texas Woman's University (BA). After a number of years as a faculty member in the mathematics department of a community college in Texas, she served for more than 20 years as Vice President for Planning and Analysis at a technical college in Wisconsin. Subsequently, Dr. Gonzalez served as Vice President for Instruction at a community college in Texas. In her administrative capacity, she served as a consultant-evaluator on a number of accreditation visits for the North Central Association/Higher Learning Commission and oversaw the preparation of several institutional and programmatic accreditation reports. She has been consulted by numerous colleges and agencies regarding the development of comprehensive models for strategic planning. She has also served as a consultant in the areas of student outcomes assessment, institutional effectiveness, and distance learning. She is a member of the Beta Gamma Sigma and Phi Delta Kappa academic honorary societies. Dr. Gonzalez oversees the academic programs at the three Regents’ universities, and she is also very involved in high-school-to-college transition issues, such as Common Core, SBAC, and transition guides. She believes in evidence-based program design.
Zheng (Tracy) Ke (Princeton) Covariance assisted screening and estimation
We consider the problem of variable selection in the very challenging regime where the signals (nonzero regression coefficients) are rare and weak and columns of the design matrix are heavily correlated. We demonstrate that in the presence of rare/weak signals, many classical methods and ambitious contemporary algorithms face pitfalls. The situation is worse in the presence of heavy correlations among design variables.
We propose a new variable selection approach which we call Covariance Assisted Screening and Estimation (CASE). CASE is a two-stage multivariate Screen and Clean algorithm with two layers of innovation. In the first layer, we alleviate the heavy correlations of the design variables by linear filtering and use the post-filtering model to construct a sparse graph. In the second layer, we use the sparse graph to guide both the screening and the cleaning.
We explain how CASE overcomes the well-known computational hurdle of multivariate screening. We also explain how CASE overcomes the so-called challenge of "signal cancellation", so its success is not tied to strong signals or any type of incoherence/irrepresentable conditions.
We set up a theoretical framework in which we show that CASE obtains the optimal rate of convergence in terms of Hamming errors. We have successfully applied CASE to long-memory time series and a change-point model, where the optimality is further investigated with the so-called notion of phase diagram.
Sumit Mukherjee (Stanford) ERGM with unknown normalizing constant
Exponential families of probability measures have been a subject of considerable interest in statistics, both in theoretical and applied areas. This talk will mostly concentrate on Exponential Random Graph Models (ERGM for short), i.e. exponential families on the space of graphs. One of the problems that frequently arises in such models is that the normalizing constant is not known in closed form. Numerical computation of the normalizing constant is also infeasible because the size of the space becomes large, even for moderately sized graphs. As such, inferential procedures such as maximum likelihood estimation are hard to carry out. It has also been empirically observed in the social science literature that some choices of sufficient statistics for ERGMs make the model "degenerate", rendering such models unsuitable for modeling purposes.
This talk will address these issues and give a framework for analyzing a class of sparse ERGMs using large deviations theory. The large deviation principle used is for the empirical degree distribution of an Erdős-Rényi graph, with a topology stronger than weak convergence. This analysis gives an approximation for the normalizing constant when the size of the graph is large. It also gives a quantification of degeneracy, along with a simple check for whether a particular ERGM is nondegenerate. As an application, we explain the nondegeneracy of certain graph statistics popular in the social science literature which are known not to cause degeneracy at an empirical level.
Xavier Perez-Gimenez (U of Waterloo) Arboricity and spanning-tree packing in random graphs with an application to load balancing
We study the arboricity A and the maximum number T of edge-disjoint spanning trees of the Erdős-Rényi random graph G(n,p). For all p(n) in [0,1], we show that, with high probability, T is precisely the minimum of delta and floor(m/(n-1)), where delta is the minimum degree of the graph and m denotes the number of edges. Moreover, we explicitly determine a sharp threshold value for p such that: above this threshold, T equals floor(m/(n-1)) and A equals ceiling(m/(n-1)); and below this threshold, T equals delta, and we give a two-value concentration result for the arboricity A in that range. Finally, we include a stronger version of these results in the context of the random graph process where the edges are sequentially added one by one. A direct application of our result gives a sharp threshold for the maximum load being at most k in the two-choice load balancing problem, where k goes to infinity. This research is joint work with Pu Gao and Cristiane M. Sato.
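In symbols, the main result stated above reads as follows, where $\delta$ denotes the minimum degree and $m$ the number of edges of $G(n,p)$: with high probability,
$$ T = \min\left\{ \delta,\ \left\lfloor \frac{m}{n-1} \right\rfloor \right\}, $$
and above the sharp threshold for $p$ also $A = \lceil m/(n-1) \rceil$.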
Michael Young (ISU) Ramsey and Anti-Ramsey multiplicities
A classic problem in Ramsey theory is determining, for a given graph G, the largest value of n such that there exists an edge coloring of the complete graph on n vertices that does not contain a monochromatic subgraph isomorphic to G. This talk will discuss, asymptotically, how many monochromatic copies of G must exist in an edge coloring of the complete graph on n vertices. This value is known as the Ramsey multiplicity. A graph is rainbow if each edge of the graph is distinctly colored. We will also discuss Anti-Ramsey multiplicities, i.e., the asymptotic maximum number of rainbow copies of a graph G that can exist in an edge coloring of the complete graph on n vertices.
Jianjun Paul Tian (College of William and Mary) Modeling of tumor growth with therapies and tumor stem cell initiation
Mathematics plays an important role in biological and biomedical research. In this talk, I will illustrate that mathematics can be an effective tool to improve our understanding of biomedical processes and help to develop more effective treatment plans. As showcases, I will talk about mathematical modeling for glioma virotherapy and for traditional therapies of glioblastoma, and how computational and mathematical analysis can help to find answers to specific medical problems. I will also mention that mathematical models can be used to study tumor stem cell initiation.
Elad Aigner-Horev (Hamburg) Transference problems in combinatorics
Following a brief introduction to extremal graph theory, we shall proceed to consider recent extensions of the so-called Erdős-Stone-Simonovits theorem. Two venues will be considered: sparse random graphs and, thereafter, sparse pseudorandom graphs. In particular, we shall present an Erdős-Stone-Simonovits type result for odd cycles in sparse pseudorandom graphs. The latter comes within a polylogarithmic factor of a conjecture of Krivelevich, Lee, and Sudakov. In this manner, the notion of transference problems will be introduced.
For a different flavor of transference problems, we shall present a result concerning the transference of the so-called Furstenberg-Sárközy theorem to sparse pseudorandom sets. The latter theorem addresses the emergence of two-point configurations separated by a pre-chosen polynomial gap.
The first part of the talk is based on a joint work with Hiep Han and Mathias Schacht. The second part is based on a joint work with Hiep Han.
Zhan Chen (U Minnesota) Multiscale modeling and computation of biomolecular solvation and tumor growth
Multiscale modeling is a growing need in mathematical biology because of the large number of variables involved and the range of scales required for realistic simulations of complex biological systems. Challenging mathematical and computational questions naturally arise from such multiscale modeling in biology. In this talk I will present our novel differential-geometry-based multiscale modeling of biomolecular solvation as well as a hybrid model of tumor growth. In our solvation model, the differential geometry theory of surfaces is employed as a natural means to couple the macroscopic continuum mechanical description of the aquatic environment with the microscopic discrete atomic description of the macromolecule. In the hybrid tumor model, we treat the mechanics and growth of individual tumor cells discretely, while modeling the nutrients and the mechanics of the extracellular matrix (ECM) as continua. This model is suitable for investigating numerous experiments with highly localized effects in tumor research which discrete-cell or continuum models alone may not be able to address.
Ruchira Datta (UCSF) Modeling the dynamics of tumor heterogeneity
Simon Tavaré introduced Approximate Bayesian Computation (ABC) for simulating posterior distributions where computing the likelihood is intractable. In 2010, Tavaré and Andrea Sottoriva extended ABC to complex models from cancer biology. On the other hand, Hisashi Ohtsuki, Martin Nowak and colleagues have introduced evolutionary graph theory, the study of evolutionary dynamics on graphs, and have proved results about time to fixation (i.e., for one subpopulation to take over the graph) in several settings. We will recapitulate these results and also characterize the nontrivial spatial distributions that arise and are observed during clinical timescales. We will use measures of spatial/spatiotemporal statistics of point processes arising from a variety of fields as summary statistics for ABC, to see if we can use them to distinguish regimes of interest in the parameter space corresponding to different hypotheses (e.g., cooperation, competition, and neutral coexistence of different subpopulations occupying the graph). We will apply the resulting new statistical tests to images of heterogeneous tumors, informing the course of therapy. Time permitting, we will touch on the connection with our work on the theory of graphical games. This research is in progress.
Bernard Lidicky (UIUC) Applications of Flag Algebras in hypercubes and permutations
Flag algebras is a method, recently developed by Razborov, designed for attacking problems in extremal graph theory. The method has also recently been applied in discrete geometry and to permutation patterns. The aim of this talk is to give a gentle introduction to the method and show some of its applications to hypercubes and permutations.
The talk is based on joint works with J. Balogh, P. Hu, H. Liu, O. Pikhurko, B. Udvari, and J. Volec.
Hao Huang (Rutgers) The minimum number of nonnegative edges in hypergraphs
Extremal combinatorics studies the maximum or minimum possible size of a combinatorial structure satisfying certain properties. In this talk I will review some results and recent developments in this field and their connections with other areas, and then focus on the following extremal problem. A hypergraph H is said to have the MMS property if for every assignment of weights to its vertices with nonnegative sum, the number of edges whose total weight is nonnegative is at least the minimum degree of H. We show that all sufficiently large hypergraphs with equal codegrees have the MMS property, and prove a long-standing conjecture of Manickam, Miklós, and Singhi as a corollary.
David Sivak (UCSF) Free energy, optimal control, and optimal response in microscopic nonequilibrium systems
Molecular machines are protein complexes that convert between different forms of energy, and they feature prominently in essentially any major cell biological process. A plausible hypothesis holds that evolution has sculpted these machines to efficiently transmit energy and information in their natural contexts, where energetic fluctuations are large and nonequilibrium driving forces are strong. Toward a systematic picture of efficient, stochastic, nonequilibrium energy and information transmission, I present theoretical developments in three distinct yet related areas of nonequilibrium statistical mechanics: How can we measure how far from equilibrium a driven system is? How do we find efficient methods to push a system rapidly from one state to another? And finally, what are generic properties of systems that efficiently harness the energy and information present in environmental fluctuations?
Yinxiao Huang (Univ Illinois Urbana-Champaign) Recursive nonparametric estimation for time series
Online or recursive estimation is natural for forecasting and is computationally attractive due to fast real-time updates when a new data item becomes available. In this talk we consider online kernel estimation for general time series models that satisfy the predictive dependence measure of Wu (2005). For a large class of stationary time series that are short- and long-range dependent, we will study the asymptotic properties of both the recursive density and the recursive regression estimators. We will characterize the asymptotic almost sure behavior of the recursive estimators by deriving sharp laws of the iterated logarithm, which are important for online procedures. We will also investigate the asymptotic normality, and the almost sure version of the optimal uniform convergence rates.
Shirshendu Chatterjee (NYU) Aspects of long-range first-passage percolation
Y.T. Poon (ISU) Spectral inequalities and quantum marginal problems
The characterization of the spectra of Hermitian matrices $A$, $B$ and $C$ satisfying $A+B=C$ began with Weyl's inequalities in 1912. The problem was solved 15 years ago with conditions given by a set of linear inequalities. Recently, similar inequalities have arisen in the solutions of some quantum marginal problems. However, in both cases, the number of inequalities grows exponentially with the size of the matrices.
In this talk, we will discuss the connection among these and other related problems. In particular, we are interested in specific problems for which the solution can be given by a small set of inequalities.
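For reference, Weyl's 1912 inequalities mentioned above assert that if $A$, $B$, $C$ are $n \times n$ Hermitian matrices with $A + B = C$ and eigenvalues listed in decreasing order $\lambda_1 \geq \cdots \geq \lambda_n$, then
$$ \lambda_{i+j-1}(C) \leq \lambda_i(A) + \lambda_j(B) \qquad \text{whenever } i + j - 1 \leq n. $$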
Misun Min (Argonne National Lab, joint with CAM) Scalable high-order algorithms and simulations for electromagnetics and fluids
We demonstrate scalable high-order algorithms based on spectral element, spectral element discontinuous Galerkin, and spectral element discontinuous Galerkin lattice Boltzmann approaches for transport simulations in electromagnetics and fluids. For electromagnetic systems, we consider the time-domain and frequency-domain formulations for wave scattering and absorption problems. For fluid systems, we consider high-order lattice Boltzmann approaches for turbulence and heat transfer simulations. Algorithmic efforts include efficient parallelization and performance analysis, and realistic simulations with validation. Our computational methodologies will be further extended for efficient and accurate predictive modeling for finding the right materials and the optimal structure of solar cells possessing high energy-conversion efficiency and for understanding heat transfer mechanisms for multiphase flow systems.
Jim Cannon (BYU) Does every set have a size?
There are ideas about set size (measure) that I could have easily understood as a graduate student but somehow missed.
I will explain my favorites (my favorite episodes in the search for the ideal measure), with highlights including, for example, Archimedes's discovery of the volume of a ball; Wallis's product formula for $\pi$ and the role it played in Fourier's work on heat, which in turn prompted Riemann's formal definition of integration; the infamous Hausdorff-Banach-Tarski paradox in response to perceived difficulties with Lebesgue integration, with the reluctant conclusion that some sets are simply too complicated to be assigned a size; the fat and space-filling curves of Peano, Hilbert, and Pólya; and current research on the constrained isoperimetric inequality.
Joe Mileti (Grinnell College) (Non)computable algebra
There is an obvious, but inefficient, procedure to determine whether a given natural number is prime: simply check whether any of the finitely many smaller numbers evenly divide into it. Despite the fact that such a naive approach does not work in integral domains like Z[x] or the Gaussian integers Z[i], with a bit of theory one can still develop computational procedures that work to determine the prime elements in these cases. In some settings, such as when working with the ring of integers in algebraic number fields, one often prefers to work with ideals over elements, but then it becomes desirable to determine computationally whether a given element is in an ideal. We investigate the question of whether certain ideals and/or the set of prime elements are necessarily computable in certain classes of computable rings.
Wei Wang (Florida International Univ) Multiscale discontinuous Galerkin methods for second order elliptic equations
In this talk we will introduce the multiscale discontinuous Galerkin (DG) method for solving a class of second order elliptic problems. When the equations contain small scales or oscillating coefficients, traditional numerical methods require extremely refined meshes to resolve the small scale structure of solutions, which brings numerical difficulties. The main ingredient of our method is to incorporate the small scales into finite element basis functions so that the method can capture the multiscale solution on coarse meshes.
James Rossmanith (ISU) Numerical methods for hyperbolic conservation laws with application to plasma physics
Hyperbolic conservation laws are systems of first-order partial differential equations that model the dynamics of linear and nonlinear wave phenomena. Numerical methods for approximately solving such equations must satisfy certain basic conditions in order to achieve physically meaningful results, including numerical conservation and various consistency requirements. In specific application areas, many additional properties, including conservation of ancillary quantities and the satisfaction of constraints, must be satisfied in order to achieve accurate numerical results.
In this work we are concerned with the development and application of numerical methods for various models in plasma physics. Plasma is a gas to which sufficient energy has been supplied to dissociate electrons from their nuclei, thus forming a collection of positively and negatively charged ions. The presence of charge carriers makes the plasma electrically conducting, so that it responds strongly to electromagnetic fields.
Specifically, we describe in this talk recent work on three classes of plasma models.
1. Magnetohydrodynamics: we develop a high-order constrained transport approach, motivated by Whitney forms from discrete exterior calculus, to guarantee a globally divergence-free magnetic field.
2. Two-fluid models: we develop high-order schemes to solve a class of quadrature-based moment-closure models and apply this to the problem of magnetic reconnection.
3. Kinetic Vlasov-Poisson: we develop high-order schemes to solve the Vlasov-Poisson system with the property that mass, total energy, and positivity of the distribution function are maintained.
The work presented here is joint with several people, including C. Helzel (Bochum), D. Seal (Michigan State), and Y. Cheng (Wisconsin).
Mike Ferrara (UC Denver) Realization problems for degree sequences
The {\it degree} of a vertex $v$ in a graph $G$ is the number of edges incident to $v$, and the {\it degree sequence} of $G$ is the list of degrees of the vertices of $G$. A nonnegative integer sequence $\pi$ is then said to be {\it graphic} if it is the degree sequence of some graph $G$, and in this case we call $G$ a {\it realization} of $\pi$. While there are a number of efficient ways to determine if a given sequence is graphic, it is also the case that a particular graphic sequence may have a large number of nonisomorphic realizations. Thus, it is of interest to study the properties that arise over the family of realizations of a sequence.
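One classical efficient test of the kind mentioned above is the Erdős-Gallai criterion: a nonincreasing sequence $\pi = (d_1, \ldots, d_n)$ of nonnegative integers with even sum is graphic if and only if, for every $k$ with $1 \leq k \leq n$,
$$ \sum_{i=1}^{k} d_i \ \leq\ k(k-1) + \sum_{i=k+1}^{n} \min(d_i, k). $$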
In this talk, we will primarily focus on {\it potential} problems for graphic sequences, wherein we wish to determine when at least one realization of a given sequence has a particular property. Specifically, we will examine potential degree sequence analogues to the Tur\'{a}n function and graph packing, both of which are widely studied in classical graph theory. We will also discuss some interesting applications to discrete imaging and the study of complex networks.
Ryan Martin (ISU) The edit distance in graphs
In this talk, we will discuss the edit distance function, a function of a hereditary property $H$ and of $p$, which measures the maximum proportion of edges in a density-$p$ graph that need to be inserted/deleted in order to transform it into a member of $H$. We will describe a method of computing this function and give some results that have been attained using it. The edit distance problem has applications in property testing and evolutionary biology and is closely related to well-studied Turán-type problems. The results we address will include cycle-free graphs and show a close relationship with both the problem of Zarankiewicz and strongly regular graphs. This is joint work with many collaborators, including former graduate students Tracy McKay (Dickinson College) and Chelsea Peck.
Marilyn Carlson (ASU) A research-based approach for improving precalculus teaching and learning
The function concept is a central idea of precalculus and beginning calculus and is used for modeling in the sciences, yet many students complete courses in precalculus with weak understandings of this concept. Students who are unable to construct meaningful function formulas to relate two varying quantities have little chance of understanding ideas of derivative, accumulation and the Fundamental Theorem of Calculus. They are also unable to compose two functions and/or use the chain rule to model relationships between changing quantities in applied problems in calculus, physics, and engineering.
The Pathways Precalculus student materials and teacher resources provide one response to this problem. In this presentation I will describe the Pathways Precalculus student materials and teacher tools that are showing positive gains in student learning of the function concept and other foundational ideas for learning calculus. Our approach to developing and refining these materials should provide a generalizable model for others interested in shifting their curriculum and instruction to support student construction of mathematical practices and knowledge for continued mathematics, science, and engineering course taking and learning.
Tim McNicholl (ISU) Asymptotic density and the Ershov Hierarchy
We discuss recent work on applications of asymptotic density in computability theory. In particular, we classify the asymptotic densities of the $\Delta^0_2$ sets according to their level in the Ershov hierarchy. It is shown that for $n \geq 2$, a real $r \in [0,1]$ is the density of an $n$-c.e.\ set if and only if it is a difference of left-$\Pi_2^0$ reals. Further, we show that the densities of the $\omega$-c.e.\ sets coincide with the densities of the $\Delta^0_2$ sets, and there are $\omega$-c.e.\ sets whose density is not the density of an $n$-c.e.\ set for any $n \in \omega$. These results are joint work with Rod Downey (Wellington), Carl Jockusch (Illinois), and Paul Schupp (Illinois).
Archived
September 17  Tim McNicholl (ISU) Asymptotic density and the Ershov Hierarchy
Poster
September 20 (FRI  Room 150)  Marilyn Carlson (ASU) A research-based approach for improving precalculus teaching and learning
Poster
September 24  Chelsey Lass and Allison Clark from Transamerica on the actuarial profession
October 15  Ryan Martin (ISU) The edit distance in graphs
October 22  Mike Ferrara (UC Denver) Realization problems for degree sequences
October 29  James Rossmanith (ISU) Numerical methods for hyperbolic conservation laws with application to plasma physics
November 4  Wei Wang (Florida International Univ) Multiscale discontinuous Galerkin methods for second order elliptic equations (joint with CAM)
November 5  Joe Mileti (Grinnell College) (Non)computable algebra
November 12  Jim Cannon (BYU) Does every set have a size?
November 19  Misun Min (Argonne National Lab) Scalable high-order algorithms and simulations for electromagnetics and fluids (joint with CAM)
December 3  Y.T. Poon (ISU) Spectral inequalities and quantum marginal problems
January 21 in Carver 298  Shirshendu Chatterjee (NYU) Aspects of long-range first-passage percolation
Probability candidate poster
Thursday, January 23 in Durham 0171  Yinxiao Huang (Univ Illinois Urbana-Champaign) Recursive nonparametric estimation for time series
Probability candidate poster
Monday, January 27 in Carver 298  David Sivak (UCSF) Free energy, optimal control, and optimal response in microscopic nonequilibrium systems
Math Bio candidate poster
Tuesday, January 28 in Carver 202  Hao Huang (Rutgers) The minimum number of nonnegative edges in hypergraphs
Discrete Math candidate poster
Wednesday, January 29 in Carver 298  Bernard Lidicky (UIUC) Applications of Flag Algebras in hypercubes and permutations
Discrete Math candidate poster
Thursday, January 30 in Carver 196 Ruchira Datta (UCSF) Modeling the dynamics of tumor heterogeneity
Math Bio candidate poster
Tuesday, February 4 in Carver 202  Zhan Chen (U Minnesota) Multiscale modeling and computation of biomolecular solvation and tumor growth
Math Bio candidate poster
Wednesday, February 5 in Carver 298  Elad AignerHorev (Hamburg University) Transference problems in combinatorics
Discrete Math candidate poster
Thursday, February 6 in Carver 305  Jianjun Paul Tian (College of William and Mary) Modeling of tumor growth with therapies and tumor stem cell initiation
Math Bio candidate poster
Monday, February 10 in Carver 202  Michael Young (ISU) Ramsey and Anti-Ramsey multiplicities
Discrete Math candidate poster
Tuesday, February 11 in Carver 202  Xavier Perez-Gimenez (University of Waterloo) Arboricity and spanning-tree packing in random graphs with an application to load balancing
Discrete Math candidate poster
Thursday, February 13 in Carver 202  Sumit Mukherjee (Stanford) ERGM with unknown normalizing constant
Probability candidate poster
RESCHEDULED FOR: Friday, February 21 in 1160 Sweeney @ 10:00 a.m.  Zheng (Tracy) Ke (Princeton) Covariance assisted screening and estimation
Probability candidate poster
Wednesday, February 26 in Carver 202  Diana Gonzalez (Iowa Board of Regents) Transition issues in early college mathematics
Poster
March 25  Bergman, Hogben, Kliemann, Peters & Sacks (ISU) Faculty workshop: Improving the graduate program climate
April 8  Stephen Kleene (MIT) Logarithmically spiralling helicoids
April 15  Tim Chumley (ISU) Random billiard models
Monday, April 21  Kyle Mandli (University of Texas at Austin) Approaches to forecasting storm surge more quickly and accurately Joint with CAM