Title. Phase transition in the intermediate disorder regime for a directed polymer model

Abstract. I will discuss a random directed polymer model defined on a hierarchical diamond lattice. The focus will be on a so-called

Title. On the differences between consecutive primes

Abstract. In 1976, Gallagher proved that the Hardy-Littlewood prime k-tuple conjecture implies that, for the primes up to x, the number of primes in the interval (x, x + λ log x], for any fixed positive constant λ, has a Poisson distribution. Recently, Daniel A. Goldston and I showed that the number of consecutive primes with difference λ log x has the Poisson distribution superimposed on the conjectured formula for pairs of primes with this difference. In this talk, I will present more precise formulas if λ→ 0 as x → ∞. In order to obtain these formulas, it is necessary to prove some new singular series average results. If time permits, I will report new results on the limit points of the sequence (p
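To illustrate the statistic in question, here is a small numerical sketch (function names are illustrative): it sieves the primes and tabulates the counts of primes in intervals (x, x + λ log x]. By the prime number theorem the average count is about λ; Gallagher's conditional result concerns the full Poisson distribution of these counts.

```python
import bisect
import math

def primes_up_to(n):
    """Sieve of Eratosthenes, returning all primes <= n."""
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return [i for i, flag in enumerate(sieve) if flag]

def interval_counts(N, lam, step=97):
    """Counts of primes in (x, x + lam*log x] for x = 1000, 1000+step, ..., < N."""
    ps = primes_up_to(2 * N)  # sieve past N so intervals never run off the end
    counts = []
    for x in range(1000, N, step):
        h = lam * math.log(x)
        counts.append(bisect.bisect_right(ps, x + h) - bisect.bisect_right(ps, x))
    return counts
```

The empirical mean of these counts should hover near λ, while the Poisson shape of their distribution is what the conjectural results describe.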

Abstract. The filtering problem is to take partial and noisy measurements of a dynamic process and to use them to estimate, in real time, the complete state of the process. The filtering algorithm is constructed using models of the dynamics and measurement processes, together with some additional partial and noisy information about the state of the process. The algorithm must be computationally efficient enough that the state estimates can be computed in real time as the process evolves.

When the dynamics and measurement processes are linear and the additional information concerns the initial state of the dynamics, the method of choice is the Kalman filter, which was developed in the early 1960s. It and its extensions have been employed in a variety of applications ranging from aerospace to weather prediction. A Google search for the term “Kalman Filtering” returns over half a million hits.

But the additional information isn’t always just about the initial state of the dynamics; it may include information about the final state as well. For example, in missile defense applications one may have partial information both about where a missile is coming from and about where it is going. We present the Boundary Value Filter for linear processes about which partial boundary information is available. The Kalman Filter is a special case of the Boundary Value Filter.
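As background for how such filters operate, here is a minimal sketch of the standard linear Kalman filter predict/update recursion (the function name and interface are illustrative; the Boundary Value Filter itself is not shown):

```python
import numpy as np

def kalman_filter(z, F, H, Q, R, x0, P0):
    """Minimal linear Kalman filter: run predict/update over measurements z.

    F, H are the dynamics and measurement matrices; Q, R the process and
    measurement noise covariances; x0, P0 the initial state estimate and
    its covariance.
    """
    x, P = x0, P0
    estimates = []
    for zk in z:
        # Predict: propagate the state and covariance through the dynamics.
        x = F @ x
        P = F @ P @ F.T + Q
        # Update: blend the prediction with the new measurement via the gain K.
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (zk - H @ x)
        P = (np.eye(len(x0)) - K @ H) @ P
        estimates.append(x.copy())
    return np.array(estimates)
```

For a static scalar state observed in noise, the recursion reduces to a running average of the measurements, which is a useful sanity check.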

Abstract. Network Science is a rapidly growing interdisciplinary area at the intersection of mathematics, computer science, and a multitude of disciplines ranging from the physical and life sciences to the social sciences and even the humanities. Network analysis methods are now widely used in proteomics, the study of social networks (both human and animal), finance, ecology, bibliometric studies, archeology, the evolution of cities, and a host of other fields.

In this talk I will introduce the audience to some of the mathematical and computational problems and methods of complex networks, with an emphasis on the basic notions of centrality and communicability. More specifically, I will describe some of the problems in large-scale sparse numerical linear algebra arising in this area, and how they differ from the corresponding problems encountered in more traditional applications of numerical analysis.

The talk will be accessible to students, requiring only a modest background in linear algebra and graph theory.

Abstract. We will motivate and present a natural generalization of the algebra of matrices to an algebra of hypermatrices proposed by Bhattacharya-Mesner in the 1990s. We will show how the proposed algebra extends to hypermatrices such classical notions as the matrix inverse, the Gram-Schmidt process, unitarity, and combinatorial interpretations of the Cayley-Hamilton theorem as well as the spectral theorem. We will show how this new language provides a family of new invariants for addressing combinatorial optimization problems, including special instances of graph and subgraph isomorphism problems.

Abstract. Given a tensor (or hyper-matrix), we would like to express it in the simplest possible way as the sum of the smallest number of decomposable (or rank-1) tensors. While there are many algorithms that attempt to accomplish this task, it is known to be a very difficult problem. Moreover, such a decomposition may not be unique. When a generic tensor of a given format has a unique decomposition, we say that tensors of that format are "generically identifiable."

We propose a new method to find tensor decompositions via homotopy continuation. This technique allows us to find all decompositions of a given tensor (at least for relatively small tensors). Our experiments yielded a surprise: we found two new tensor formats, (3,4,5) and (2,2,2,3), for which the generic tensor has a unique decomposition. Using techniques from algebraic geometry, we prove that these cases are indeed "generically identifiable".

This is joint work with J. Hauenstein, G. Ottaviani and A. Sommese.
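For concreteness, a decomposable (rank-1) tensor is an outer product of vectors, and a rank-r decomposition is a sum of r such terms. A minimal NumPy sketch of this definition (function names are illustrative; this builds tensors from given factors and does not perform the homotopy-continuation decomposition described above):

```python
import numpy as np

def rank_one(vectors):
    """Outer product a ⊗ b ⊗ c ⊗ ... as a dense tensor."""
    t = vectors[0]
    for v in vectors[1:]:
        t = np.tensordot(t, v, axes=0)  # axes=0 gives the outer product
    return t

def from_cp_factors(factors):
    """Sum of rank-1 terms; factors[i] is a list of vectors, one per mode."""
    return sum(rank_one(vs) for vs in factors)
```

A quick check of the definitions: every mode unfolding of a rank-1 tensor is a rank-1 matrix, and for generic factors the unfolding rank equals the number of rank-1 terms (when small enough).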

Abstract. A team of investigators from UAB’s Office of Energetics and NORC will present on some of the challenges faced by obesity researchers that pertain to the application and misapplication of the mathematical sciences. This will be followed by a discussion of some opportunities in obesity research where mathematical inquiry could lead to substantial advancement of knowledge. Finally, a description of several ongoing mathematically oriented projects will be presented. These will include projects related to evolutionary biology, statistics and measurement, and physiology.

Here is the agenda:

David Allison, "Some mathematical problems in obesity science: from the sublime to the ridiculous"

Brandon George, "Determining an Optimal Measure of Energy Intake"

Keisuke Ejima, "The possible mechanisms of obesity propagation among US population: mathematical modelling approach"

Ed Archer, "The In Silico Reverse Engineering of Human Nutrient-Energy Physiology using the “Body-as-Ecosystem” Paradigm and Agent-Based Modeling"

Peng Li, "What if the missing is not at random in obesity studies?"

Andrew Brown, "Improving the accuracy and specificity of estimates using cross-method comparisons, such as Double Sampling with Multiple Imputation"


Abstract. Banks use models to create estimates and forecasts for decision-making and for reporting to investors, regulators, and the government. Where possible, such models are based on economic, financial, or mathematical theories, and estimation and testing procedures usually involve statistics, i.e., functions of available data. Many math grads work as managers and analysts in development—design and construction—and validation or independent testing. The purpose of this talk is to provide an overview of the field and to indicate the interesting (read: difficult) problems that arise when attempting to build faithful, reliable, and simplified representations of economic and social phenomena. We face epistemological issues, the Problem of Induction, methodological issues, resource constraints, and data limitations. Yes, it’s constrained optimization of problems we don’t completely understand. That is what makes it challenging, rewarding, and fun.


Abstract. The Conference Board of the Mathematical Sciences (CBMS) recently issued a report, “The Mathematical Education of Teachers II”, as a follow-up to a report issued about 10 years ago. The CBMS represents 16 mathematical groups, including the AMS, MAA, SIAM, and NCTM. The report makes recommendations for mathematics departments relevant to university preparation of elementary (K-5), middle (5-8), and high school (8-12) teachers of mathematics. I will discuss the implications for our department in synergy with UABTeach, the Common Core State Standards in Mathematics, and the revisions to the Praxis II Examination for future high school mathematics teachers.

Abstract. We use known restriction theorems for the Fourier transform to the unit sphere to prove weighted inequalities for the Fourier transform (also known as Pitt inequalities). We also prove versions of the uncertainty principle for the Fourier transform. This is joint work with D. Gorbachev and S. Tikhonov.
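As background, the classical L² Pitt inequality on ℝⁿ has the following form (stated here for orientation; the weighted inequalities of the talk are more general):

```latex
% Pitt's inequality (L^2 case), valid for 0 <= beta < n/2:
\int_{\mathbb{R}^n} |\xi|^{-2\beta}\, |\widehat{f}(\xi)|^2 \, d\xi
\;\le\; C_{n,\beta} \int_{\mathbb{R}^n} |x|^{2\beta}\, |f(x)|^2 \, dx .
```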


Title. Absolutely continuous spectrum for Anderson models on certain graphs

Abstract. The Anderson model is a random Schrödinger operator that was introduced by Nobel Prize winner P. W. Anderson in order to describe quantum motion in randomly disordered media such as doped semiconductors. The surprising phenomenon for physicists was so-called Anderson localization (the appearance of pure point spectrum), which is now mathematically quite well understood. However, it seems to be much more challenging to prove some of the open conjectures about quantum diffusion, and in particular the existence of absolutely continuous spectrum for such operators. On the lattice ℤ
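For orientation, the standard discrete Anderson model acts on ℓ²(ℤᵈ) as follows, where λ > 0 is the disorder strength and the ω(x) are i.i.d. random variables:

```latex
(H_\omega \psi)(x) \;=\; \sum_{|y - x| = 1} \psi(y) \;+\; \lambda\,\omega(x)\,\psi(x),
\qquad \psi \in \ell^2(\mathbb{Z}^d).
```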


Title. Thurston's Theorem in Complex Dynamics

Abstract. We will give a short introduction to the main concepts of complex dynamics. We will explain the statement of Thurston's characterization of rational maps, one of the most important results in the field, which relates topological and geometric properties of branched covers of the 2-sphere. We will illustrate the ideas behind the proof, including our own contributions to the field, such as solving Pilgrim's conjecture and obtaining a topological classification of canonical obstructions.


Title. The Nirenberg problem and its generalizations: A unified approach.

Abstract. The classical Nirenberg problem asks which functions on the sphere arise as the scalar curvature of a metric conformal to the standard metric. In this talk, we will discuss similar questions for fractional Q-curvatures. This is equivalent to solving a family of nonlocal nonlinear equations of order less than n, where n is the dimension of the sphere. We will give a unified approach to establishing existence and compactness of solutions. The main ingredient is the blow-up analysis for nonlinear integral equations with critical Sobolev exponents. We will also discuss related topics, including solutions with isolated singularities. This talk is based on joint work with L. Caffarelli, Y.Y. Li, Y. Sire and J. Xiong.

Abstract. We study the sequence of particle collisions in a small system of hard balls. We demonstrate that ergodicity implies a quite unusual phenomenon: the particles have preferences over long time intervals, during which a particle consistently collides more with certain particles and less with others. It looks as if there is an effective interaction between the particles. Although the preferences change sooner or later, the average waiting time until such a change is infinite. The results hold for a dilute gas with arbitrary short-range interactions and for dense fluids of hard balls.

Abstract. We have recently developed Bayesian hierarchical GLMs for jointly analyzing numerous common (i.e., minor allele frequency > 1%) and rare genetic variants in population-based genetic association studies. Our Bayesian hierarchical models can incorporate into the analysis the distribution of genetic effects across the genome (many small values and occasional large values), the hierarchical structure of variants (i.e., variants can be mapped into genes), and biological characteristics of variants (e.g., allele frequency, functional score). The proposed hierarchical modeling approach can jointly estimate the effects of individual variants and the effects of genes and pathways (i.e., the overall effects of genetic variants within a gene or pathway), and offers increased power in detecting disease-associated variants and genes. We have developed a fast algorithm to fit the proposed hierarchical GLMs by incorporating flexible expectation-maximization (EM) steps into the standard iteratively weighted least squares (IWLS). The methods have been implemented in a freely available R package, BhGLM (http://www.ssg.uab.edu/bhglm/). Here I describe our models, algorithms, and real data applications, and outline extensions to family-based case-control studies and extreme phenotype sampling designs.

Abstract. In recent years, the study of the Kähler-Ricci flow has been generalized substantially. The key change is to allow the evolution of the Kähler class, fully exploiting an observation made by Hamilton when he introduced the flow. The main geometric motivation is Tian's Program, i.e., the geometric-analysis version of the Minimal Model Program from algebraic geometry. The method developed by Kolodziej for the complex Monge-Ampère equation in the pluripotential theory of several complex variables has played a fundamental role in obtaining the crucial estimates. The combination of ideas and techniques from different major branches of pure mathematics has resulted in a very active research field, promoting interactions with related areas.
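For orientation, the Kähler-Ricci flow and the induced linear evolution of the Kähler class are as follows (a standard cohomological identity; c₁(X) denotes the first Chern class):

```latex
\frac{\partial \omega_t}{\partial t} \;=\; -\,\mathrm{Ric}(\omega_t),
\qquad [\omega_t] \;=\; [\omega_0] \;-\; t\, c_1(X).
```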



Abstract. Let (X_k)_{k \geq 1} and (Y_k)_{k\geq1} be two independent sequences of independent identically distributed random variables having the same law and taking their values in a finite alphabet \mathcal{A}_m. Let LC_n be the length of the longest common subsequence of the random words X_1\cdots X_n and Y_1\cdots Y_n. Under assumptions on the distribution of X_1, LC_n is shown to satisfy a central limit theorem. This is in contrast to the Bernoulli matching problem or to the random permutations case, where the limiting law is the Tracy-Widom one. (Joint with Umit Islak)
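The quantity LC_n itself is computed by the classic dynamic program; a short Python sketch of the definition (purely illustrative, and unrelated to the proof of the central limit theorem):

```python
def lcs_length(x, y):
    """Length of the longest common subsequence of sequences x and y,
    via the standard O(len(x)*len(y)) dynamic program."""
    m, n = len(x), len(y)
    dp = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            if x[i - 1] == y[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1  # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])
    return dp[m][n]
```

Running this on two long i.i.d. random words over a small alphabet produces samples of LC_n whose fluctuations the theorem describes.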

Abstract. The quantification of uncertainties inherent to parameters, initial and boundary conditions, measured data, and models themselves is necessary to make predictions with reduced and quantified uncertainties. This requires a synergy between the underlying science, numerical and functional analysis, probability, and statistics. In this presentation, we will discuss basic issues that must be addressed when quantifying input and output uncertainties in physical and biological models. This will be motivated by discussion regarding the role of uncertainty quantification for weather and climate models, subsurface hydrology and geology, nuclear power plant design, and biology. We will then discuss global sensitivity techniques for parameter selection, Bayesian model calibration, sampling and spectral methods for uncertainty propagation, and issues pertaining to surrogate model construction. We will also indicate connections between uncertainty quantification and robust control design.

Open questions and future research directions will be noted throughout the presentation and students are encouraged to attend.


Abstract. A homogeneous topological space is one in which, for any two points x and y, there is a homeomorphism of the space to itself taking x to y. Prevalent examples of homogeneous spaces include all manifolds (without boundary) and all topological groups. In the early 1920s, Knaster and Kuratowski, two giants of general topology and continuum theory, asked whether the circle is the only homogeneous continuum (compact connected space) in the plane. This question led to substantial work by a number of well-known mathematicians, and several years later two spectacular new exotic homogeneous spaces were discovered. I will discuss our recent work on this question, in which we show once and for all that the circle and these other two spaces are the only homogeneous continua in the plane.

This is joint work with Lex Oversteegen.

Abstract. Since Lord Rayleigh conjectured that the disk should minimize the first Laplace-Dirichlet eigenvalue among all shapes of equal area more than a century ago, eigenvalue optimization problems have been active research topics with applications in various areas including mechanical vibration, electromagnetic cavities, photonic crystals, and population dynamics. In this talk, we will review some interesting classical problems and discuss some recent developments.
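As a worked instance of Rayleigh's conjecture (now the Faber-Krahn inequality), one can compare the first Dirichlet eigenvalue of the unit-area disk, π j₀,₁² where j₀,₁ is the first zero of the Bessel function J₀, with that of the unit square, 2π². A short numerical check (assuming SciPy is available for the Bessel zero):

```python
import math
from scipy.special import jn_zeros

# First Dirichlet eigenvalue of the unit-area disk: (j_{0,1}/r)^2 with
# radius r = 1/sqrt(pi), i.e. pi * j_{0,1}^2.
j01 = jn_zeros(0, 1)[0]        # first zero of J_0, approximately 2.4048
lam_disk = math.pi * j01 ** 2  # approximately 18.17

# Unit square: eigenvalues pi^2 (m^2 + n^2); the smallest is 2*pi^2,
# approximately 19.74, so the disk indeed wins.
lam_square = 2 * math.pi ** 2
```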


Abstract. Computing reliable solutions to inverse problems is important in many applications such as biomedical imaging, computer graphics, and security. Regularization by incorporating prior knowledge is needed to stabilize the inversion process. In this talk, we develop a new framework for solving inverse problems that incorporates probabilistic information in the form of training data. We provide theoretical results for the underlying Bayes risk minimization problem and discuss efficient approaches for solving the associated empirical Bayes risk minimization problem. Various constraints can be imposed to deal with large-scale problems. Here we describe methods for computing optimal spectral filters, for cases where the SVD is available, and methods for computing an optimal low-rank regularized inverse matrix, for cases where the forward model is not known.

This is joint work with Matthias Chung (Virginia Tech) and Dianne O'Leary (University of Maryland, College Park).
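For readers unfamiliar with spectral filtering, here is the standard Tikhonov instance, in which the filter factors are fixed by a regularization parameter rather than learned from training data as in the framework above (a sketch, not the authors' method):

```python
import numpy as np

def tikhonov_filtered_solution(A, b, alpha):
    """Solve min ||Ax - b||^2 + alpha^2 ||x||^2 via SVD spectral filtering.

    The filter factors phi_i = s_i^2 / (s_i^2 + alpha^2) damp the
    contributions of small singular values, stabilizing the inversion.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    phi = s**2 / (s**2 + alpha**2)
    coeffs = phi * (U.T @ b) / s  # filtered spectral coefficients
    return Vt.T @ coeffs
```

The filtered solution agrees with the direct solution of the regularized normal equations (AᵀA + α²I)x = Aᵀb, which makes for a simple correctness check.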


Abstract. We'll cover some basics about the Maxwell operator in bounded regions and introduce its rigorous mathematical definition. For most of the talk I'll be integrating by parts, so everybody is welcome to participate.


Abstract. Analyzing data collected from many areas is a challenge facing scientists and engineers. High dimensionality makes these data sets hard to tackle. Fortunately, one can work with low-dimensional structures, because in many cases data concentrates around a low-dimensional subspace, either globally or in a local neighborhood. After an introduction to the area, I will present a regression problem in detail and use it as an example of how to capture the low-dimensional structure of data and make decisions based on the learned structure. In the regression problem, more specifically, a set of data points x and the corresponding responses y are given, and one wants to find a mapping f such that f(x) approximates y well and can be applied to unobserved instances. An algorithm with piecewise linear mappings built on a tree structure is proposed. The proposed method can be applied when both x and y are high-dimensional, and handles well, in particular, the case where closeness in x is not consistent with closeness in y. Comparison with competing methods in experiments shows the proposed method to be advantageous.
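A toy version of tree-structured piecewise linear regression can be sketched as follows (the names, splitting rule, and stopping criteria are illustrative simplifications, not the proposed algorithm):

```python
import numpy as np

def fit_piecewise_linear(x, y, depth=2):
    """Recursively split on the coordinate of largest variance, then fit an
    ordinary least-squares affine map in each leaf."""
    if depth == 0 or len(x) < 2 * (x.shape[1] + 1):
        X = np.hstack([x, np.ones((len(x), 1))])  # affine design matrix
        W, *_ = np.linalg.lstsq(X, y, rcond=None)
        return ("leaf", W)
    d = int(np.argmax(x.var(axis=0)))             # split coordinate
    t = float(np.median(x[:, d]))                 # split threshold
    left, right = x[:, d] <= t, x[:, d] > t
    if left.all() or right.all():                 # degenerate split: stop
        X = np.hstack([x, np.ones((len(x), 1))])
        W, *_ = np.linalg.lstsq(X, y, rcond=None)
        return ("leaf", W)
    return ("node", d, t,
            fit_piecewise_linear(x[left], y[left], depth - 1),
            fit_piecewise_linear(x[right], y[right], depth - 1))

def predict(tree, xq):
    """Route a query point down the tree and apply the leaf's affine map."""
    if tree[0] == "leaf":
        return np.append(xq, 1.0) @ tree[1]
    _, d, t, lt, rt = tree
    return predict(lt if xq[d] <= t else rt, xq)
```

Fitting y = |x| with one split recovers the two linear pieces, which a single global linear fit cannot do.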

Abstract. The generic inverse problem arises, as a fundamental part of the modeling process, when one makes external measurements on a physical system with the intention of determining unknown internal properties of the system. One could, for example, send sound waves into a body, measure the output waves, and try to infer the internal density function for non-invasive medical imaging purposes. The list of potential applications is vast, from enhanced oil recovery techniques to land mine detection to probing the earth's interior via natural earthquake waves to estimating future stock market volatility from current option prices. We will look at the background and the mathematics behind some of these inverse problems and note that they may all be handled by adaptations of a common approach.

Title. 3D Mixed Element Discontinuous Galerkin with Shock Capturing and RANS

Abstract. A parallel high-order Discontinuous Galerkin (DG) method is developed for mixed elements to solve the Navier-Stokes equations. A PDE-based artificial viscosity equation is implemented to smooth and stabilize shocks. To solve this system of nonlinear equations, a Newton solver is implemented, and preconditioned flexible GMRES is used to solve the linear system arising from the Jacobian matrix. The preconditioners implemented include Jacobi relaxation, Gauss-Seidel relaxation, line-implicit Jacobi, and ILU(0). A wide variety of simulations are performed to demonstrate the capabilities of the DG solver. The inviscid simulations include a p-adapted subsonic flow over a cylinder, a p=0 h-adapted hypersonic flow over a sphere, and a large-scale p=2 simulation of an aircraft with artificial viscosity to stabilize the shock formed on the wing. Two hypersonic viscous flows, over a cylinder and a sphere, are simulated and compared to the NASA code LAURA. The solution matches LAURA closely, and the shock becomes better resolved as the polynomial degree is increased; the heating rate on the surface matches LAURA closely at p=3. For turbulent flows, the Reynolds-Averaged Navier-Stokes (RANS) equations are solved. The new negative Spalart-Allmaras model is implemented and used to solve turbulent flow over a NACA 0012 wing, an RAE 2822 wing, and a multi-element 30P30N wing. Finally, parallel scalability is tested, and good speedup is obtained using up to 2048 processor cores; as the polynomial degree increases, the scalability improves. Although ideal speedup was not achieved, this was attributed to load balancing. These simulations demonstrate the capability of the DG solver to handle strong shocks, RANS, complex geometry, hp-adaptation, and parallel scalability.

This is joint work with Dimitri J. Mavriplis.

Figure. M=17.605, Re=376,930 flow over a cylinder, contours of artificial viscosity (left) and contours of Mach number (right)

Title. Universal computation by multi-particle quantum walk in 2D

Abstract. In this talk we discuss a model consisting of

Title. Additivity (or not) of the Fixed Point Property

Abstract. Let each of X, Y, and X intersect Y be a continuum with the fixed point property (fpp). We say that "the fpp is additive for X and Y" if X union Y has the fpp. If G is some class of continua with the fpp, we say that "the fpp is additive for G" provided that whenever X, Y, and X intersect Y are in G, the fpp is additive for X and Y.

Question. For what classes G of continua is the fpp additive?

We discuss the history of this question, reviewing both positive and negative results. We end with recent examples of Hagopian and Marsh that show the fpp is not additive for the class of tree-like continua.