Schedule - 2019-2020 | Department of Mathematics

# Riemann zeta and multiple zeta values

In this talk, we bring into perspective the famous Riemann zeta function and its natural generalization, the multiple zeta functions. We focus on the evaluation of these objects at positive integers. The techniques used in these evaluations rely on the properties of certain special functions.
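To fix notation (a standard sketch, not taken from the talk itself): the Riemann zeta function and its multiple, depth-$k$ generalization are

```latex
\zeta(s) = \sum_{n=1}^{\infty} \frac{1}{n^{s}}, \qquad
\zeta(s_1,\dots,s_k) \;=\! \sum_{n_1 > n_2 > \cdots > n_k \ge 1}
  \frac{1}{n_1^{s_1} n_2^{s_2} \cdots n_k^{s_k}} \quad (s_1 \ge 2).
```

The evaluations at positive integers mentioned above include Euler's classical $\zeta(2) = \pi^2/6$ and the earliest multiple zeta value identity, $\zeta(2,1) = \zeta(3)$.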

Although they look rather simple, it turns out that single and multiple zeta values play a very important role at the interface of analysis, number theory, geometry, and physics, with applications ranging from periods of mixed Tate motives to the evaluation of Feynman integrals in quantum field theory.

The talk should be accessible to non-specialists and graduate students.

# Continued fractions, normality, and the difficulty of multiplying by 2

An expansion of a number is said to be normal if every finite string of digits appears in the expansion with a particular limiting frequency. For base-b expansions, the required frequency of an n-digit string is b^{-n}. For continued fractions, the required frequency of a string is determined by the Gauss-Kuzmin statistics. Certain operations are known to preserve normality: for base-b expansions, multiplication and addition by non-zero rationals preserve normality, in part because the complexity of these operations in base b is negligible. An exact notion of a "low complexity operation" for continued fraction expansions has not been formulated, and even multiplication by 2 is a vastly more intricate procedure for continued fractions than for base-b expansions. We will nonetheless show that multiplication and addition by non-zero rationals preserve normality for continued fraction expansions.
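The defining frequency condition is easy to probe empirically. The sketch below (my illustration, not part of the talk) estimates string frequencies in the base-10 Champernowne constant, which Champernowne proved normal in base 10 in 1933; for a normal expansion the frequency of an n-digit string tends to 10^{-n}.

```python
def champernowne_digits(n_digits):
    """First n_digits of the base-10 Champernowne constant
    0.123456789101112..., proved normal in base 10 by Champernowne (1933)."""
    parts, total, k = [], 0, 1
    while total < n_digits:
        s = str(k)
        parts.append(s)
        total += len(s)
        k += 1
    return "".join(parts)[:n_digits]

def string_frequency(digits, s):
    """Empirical frequency of the digit string s over all overlapping
    windows; for a normal base-b expansion this tends to b**(-len(s))."""
    windows = len(digits) - len(s) + 1
    hits = sum(digits[i:i + len(s)] == s for i in range(windows))
    return hits / windows

digits = champernowne_digits(100_000)
print(string_frequency(digits, "7"))   # tends to 1/10 as the length grows
print(string_frequency(digits, "42"))  # tends to 1/100
```

Convergence is slow: at 100,000 digits the single-digit frequencies still visibly reflect the bias toward small leading digits of the concatenated integers.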

# Computability of thermodynamic invariants on shift spaces beyond SFTs

We consider shift maps on finite-alphabet shift spaces and discuss questions concerning the computability (in the sense of computable analysis) of relevant thermodynamic invariants such as entropy, topological pressure, and residual entropy. These questions have recently been studied for subshifts of finite type (SFTs) and their factors (sofic shifts) by Spandl, Hertling and Spandl, and Burr, Schmoll and Wolf. In this talk we consider possible extensions to more general classes of shift spaces, including S-gap shifts, beta-shifts, and bounded density shifts. Several positive computability results will be presented, but we also show that for certain shifts even the entropy is not computable.

The results presented in this talk are part of an on-going collaboration with M. Burr (Clemson), S. Das (NYU) and Y. Yang (Virginia Tech).

# Phase-field models and fluids

One approach to solving interface problems is the diffuse interface theory, which was originally developed as a methodology for modeling and approximating solid-liquid phase transitions in which the effects of surface tension and non-equilibrium thermodynamic behavior may be important at the surface. The diffuse interface model describes the interface by a mixing energy represented as a layer of small thickness. This idea can be traced back to van der Waals, and it is the foundation of the phase-field theory for phase transitions and critical phenomena. Thus, the structure of the interface is determined by molecular forces; the tendencies for mixing and de-mixing are balanced through the non-local mixing energy. The method uses an auxiliary function (the so-called phase-field function) to localize the phases: it assumes distinct values in the bulk phases (for instance, 1 in one phase and -1 in the other) away from the interfacial regions, over which the phase function varies smoothly.
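In one common form (a standard Ginzburg-Landau sketch with interface-width parameter $\varepsilon$ and mixing coefficient $\lambda$; the talk may use different conventions), the mixing energy of the phase function $\phi$ is

```latex
E(\phi) = \int_{\Omega} \left( \frac{\lambda}{2}\,|\nabla \phi|^{2}
  + \frac{\lambda}{4\varepsilon^{2}} \bigl( \phi^{2} - 1 \bigr)^{2} \right) dx,
```

where the double-well potential drives $\phi$ toward the bulk values $\pm 1$ and the gradient term penalizes sharp transitions; the competition between the two sets an interface thickness of order $\varepsilon$. The Cahn-Hilliard model discussed below arises as the $H^{-1}$ gradient flow of this energy,

```latex
\partial_t \phi = \gamma \, \Delta \mu, \qquad
\mu = \frac{\delta E}{\delta \phi}
    = -\lambda \, \Delta \phi + \frac{\lambda}{\varepsilon^{2}}\,\phi\,(\phi^{2} - 1).
```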

During the talk I will present the main ideas used to approximate the Cahn-Hilliard model, a classical phase-field model, introducing different numerical schemes and showing the advantages and disadvantages of each. The key point is to try to preserve the properties of the original model while keeping the numerical schemes efficient in time.

Finally, I will show how these ideas for designing numerical schemes to approximate phase-field models can be extended to other applications.

# Numerical Approaches and Applications for Uncertainty Quantification

Uncertainty is inevitable in computer-based simulations. To provide more reliable predictions of the behavior of complex systems and optimal designs for large structures, understanding and quantifying the uncertainty in simulations is critical. In this talk, we will focus on two of the main aspects of uncertainty quantification (UQ): model-form UQ (backward UQ, or model calibration) and the application of UQ to material design. For model-form UQ, where observations are available, physical constraints are incorporated into the model-correction process to enforce important physical properties of the underlying system; this improves the estimation of both the model output and the model parameters. For the application of UQ, we propose a robust inverse design procedure for the optimal morphology of nanoparticles in plasmonics. Specifically, we use a global sensitivity analysis method to identify the important random variables, treat the unimportant ones as deterministic, and consequently reduce the dimension of the stochastic space. In addition, we apply the generalized polynomial chaos expansion method to construct computationally cheaper surrogate models that approximate and replace the full simulations.
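As a rough illustration of the surrogate idea (a minimal sketch with assumed names: the model, the polynomial degree, and the sample size are all placeholders, not the simulations from the talk), a generalized polynomial chaos surrogate for a scalar model with one standard Gaussian input can be fit by least squares on probabilists' Hermite polynomials:

```python
import numpy as np
from numpy.polynomial.hermite_e import hermevander

rng = np.random.default_rng(0)

def model(xi):
    """Stand-in for an expensive simulation with random input xi ~ N(0, 1);
    exp is chosen only because its exact chaos expansion is known."""
    return np.exp(xi)

# Fit a degree-4 chaos surrogate by least squares on probabilists' Hermite
# polynomials He_k, which are orthogonal with respect to the Gaussian weight.
degree = 4
xi_train = rng.standard_normal(200)
V = hermevander(xi_train, degree)          # columns: He_0(xi), ..., He_4(xi)
coeffs, *_ = np.linalg.lstsq(V, model(xi_train), rcond=None)

def surrogate(xi):
    """Cheap polynomial replacement for the full model."""
    return hermevander(np.atleast_1d(xi), degree) @ coeffs

# For exp(xi) the exact chaos coefficients are sqrt(e)/k!, so coeffs[0]
# (the surrogate mean) should land near E[exp(xi)] = sqrt(e).
print(coeffs[0], float(surrogate(0.0)[0]))
```

Once fitted, statistics of the output (mean, variance, sensitivities) can be read off the chaos coefficients or estimated by sampling the cheap surrogate instead of the full model.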

# Weak mixing in infinite measure

The notion of weak mixing has played an important role in the ergodic theory of finite measure-preserving transformations, and there are several interesting, and different, characterizations of this notion. In infinite measure, many of these characterizations are no longer equivalent. Some go back to a 1963 paper of Kakutani and Parry, but there are many recent ones. We will discuss these various notions, including recent progress and open questions.
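For reference, one classical formulation in the finite measure-preserving setting (a standard definition, stated here for a probability space; the point of the talk is how such formulations diverge when the measure is infinite): $T$ is weakly mixing if, for all measurable sets $A$ and $B$,

```latex
\lim_{N\to\infty} \frac{1}{N} \sum_{n=0}^{N-1}
  \bigl| \mu(T^{-n}A \cap B) - \mu(A)\,\mu(B) \bigr| = 0 .
```

In the probability-space case this is equivalent, for example, to $T \times T$ being ergodic and to $T$ having no non-constant measurable eigenfunctions; it is exactly such equivalences that can fail in infinite measure.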

# Unconventional height functions in Diophantine approximation

The standard height function H(p/q)=q of simultaneous approximation can be calculated by taking the least common multiple (LCM) of the denominators of the coordinates of a rational point: H(p_1/q_1,...,p_d/q_d) = lcm(q_1,...,q_d). If the LCM operator is replaced by another operator, such as the maximum, minimum, or product, then a different height function, and thus a different theory of simultaneous approximation, results. In this talk I will discuss some basic results regarding approximation with respect to these nonstandard height functions, as well as their connection with intrinsic approximation on Segre manifolds using standard height functions. This work is joint with Lior Fishman.
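For illustration (a small sketch of my own, not from the talk), the four candidate heights can be computed side by side for a rational point:

```python
from fractions import Fraction
from math import lcm, prod

def heights(point):
    """Four candidate height functions for a rational point
    (p_1/q_1, ..., p_d/q_d), built from the denominators in lowest terms."""
    qs = [x.denominator for x in point]
    return {
        "lcm":  lcm(*qs),    # the standard height H = lcm(q_1, ..., q_d)
        "max":  max(qs),
        "min":  min(qs),
        "prod": prod(qs),
    }

# The point (1/4, 5/6): lcm = 12, max = 6, min = 4, product = 24.
print(heights([Fraction(1, 4), Fraction(5, 6)]))
```

The four values already disagree on this one point, which is why each operator induces a genuinely different theory of approximation.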

# Being on the academic job market: Why and How?

Searching for an academic job can be an overwhelming experience, particularly at the beginning. Nobody seems to know what to expect until it is already happening. In this talk, we will discuss how to prepare for an academic job search from mental preparedness to actual applications, interviews, and more! As someone who went through both post-doc and tenure-track job searches recently, I will be more than happy to share my experience and to answer any questions you may have.

# Million Dollar Problem - Poincaré Conjecture

In this talk, we will describe the only Millennium Problem solved so far: the Poincaré Conjecture. We will discuss the background of the problem and give an overview of Perelman's solution via Ricci flow. In particular, without going into technical details, we will describe Thurston's Geometrization Conjecture, which gives a complete classification of 3-manifolds.