## Contents

Suppose we were able to unleash the power of the quantum world in ways that would have been unthinkable only a few years ago. For instance, we could use quantum superposition, the possibility for a quantum bit to contain all conceivable, mutually exclusive classical states in itself. Then, in a single computational step, we could realize the parallel processing of all these classical states, whose number grows exponentially with the number of classical bits involved, through the quantum state evolution of this single state. That is the vision of quantum parallelism, one of the driving forces of quantum computing, which has been among the fastest growing areas of research over the last decade or so. These strategies have been made possible by new techniques capable of producing, manipulating, and detecting single quanta, such as photons, neutrons, and electrons.

Suppose someone claims that the chance of rain is 0.1 in Vienna and 0.1 in Budapest, yet the joint probability of rainfall in both cities is 0.99. Would such a proposition appear reasonable? Certainly not: even intuitively it makes little sense to claim that it almost never rains in either city alone, yet almost always rains in both. The worrying question remains: which numbers can be considered reasonable and consistent? Surely the joint probability should not exceed either single probability. This is certainly a necessary condition, but is it sufficient? Boole, and much later Bell (already in the quantum mechanical context and with a specific class of experiments in mind), derived constraints on the classical probabilities by formalizing such considerations. These bounds originate from the conception that every classical probability distribution is a convex sum of extreme ones, which can be characterized by two-valued measures interpretable as classical truth values. Together they form a convex polytope bounded by Boole-Bell-type inequalities.
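For a single pair of events the consistency conditions mentioned above can be written down explicitly: the joint probability must lie between max(0, *p*_A + *p*_B − 1) and min(*p*_A, *p*_B) (the Fréchet bounds, which Boole's conditions reduce to in this simplest case). A minimal sketch checking the rain example:

```python
def joint_probability_consistent(p_a, p_b, p_ab):
    """Consistency (Frechet) bounds for the joint probability of two events:
    max(0, p_a + p_b - 1) <= p_ab <= min(p_a, p_b)."""
    lower = max(0.0, p_a + p_b - 1.0)
    upper = min(p_a, p_b)
    return lower <= p_ab <= upper

# The rain example from the text: p = 0.1 in each city, joint p = 0.99.
print(joint_probability_consistent(0.1, 0.1, 0.99))  # -> False
# A joint probability below both single probabilities is admissible:
print(joint_probability_consistent(0.1, 0.1, 0.05))  # -> True
```

For more than two events, the full set of such inequalities (the facets of the polytope) grows rapidly, which is exactly where the Boole-Bell-type constraints come in.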

Fig.: Quantum cryptography using single-photon sources. (copyright) http://www.epfl.ch

Remarkably, quantum probability theory is entirely different from classical probability theory: it allows statistics of the joint occurrence of events that extend and violate Boole's and Bell's classical constraints. Yet quantum mechanics does not violate the constraints maximally; the quantum bounds fall just "in between" the classical and the maximal algebraic bounds.
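The standard illustration of this "in between" behavior is the CHSH form of Bell's inequality: classical probabilities bound the combination of correlations by 2, the algebraic maximum is 4, and quantum mechanics reaches only Tsirelson's bound 2√2 ≈ 2.83. A small numerical check using the singlet-state correlation *E*(*a*, *b*) = −cos(*a* − *b*) at the standard optimal angles:

```python
import math

def E(a, b):
    # Quantum correlation of spin measurements on a singlet pair
    # along directions at angles a and b: E = -cos(a - b).
    return -math.cos(a - b)

# Standard optimal measurement angles for maximal CHSH violation.
a, a2, b, b2 = 0.0, math.pi / 2, math.pi / 4, 3 * math.pi / 4
S = abs(E(a, b) + E(a2, b) + E(a2, b2) - E(a, b2))
print(S)           # ~2.828..., i.e. Tsirelson's bound 2*sqrt(2)
print(2 < S < 4)   # strictly between the classical bound 2 and the maximum 4
```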

In recent years there has been increasing interest in the control and manipulation of atomic wave functions. The engineering of wave functions promises applications in many areas of physics, such as quantum computing, steering chemical reactions towards a preferred direction, or optimizing high-harmonic generation. In theory, any wave function can be formed as a coherent superposition of energy eigenstates. In practice, however, preparing a preselected target state experimentally is not an easy task. Thus there is increasing demand for protocols that produce a desired designer state starting from states which are experimentally accessible. Recently, a few protocols have been suggested to create and manipulate a wave packet.
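As a toy illustration of "any wave function as a coherent superposition of energy eigenstates" (this is not one of the protocols discussed here, just a generic sketch using particle-in-a-box eigenstates), one can build a localized packet by Gaussian-weighting the expansion coefficients:

```python
import numpy as np

# Illustrative sketch: a localized wave packet as a Gaussian-weighted
# coherent superposition of particle-in-a-box eigenstates
# phi_n(x) = sqrt(2/L) * sin(n*pi*x/L).
L, n0, sigma = 1.0, 20, 3.0
n = np.arange(1, 61)
c = np.exp(-(n - n0) ** 2 / (4 * sigma ** 2))  # Gaussian weights around n0
c = c / np.sqrt(np.sum(np.abs(c) ** 2))        # normalize: sum |c_n|^2 = 1

x = np.linspace(0.0, L, 1000)
phi = np.sqrt(2.0 / L) * np.sin(np.outer(n, np.pi * x / L))
psi = c @ phi                                  # the superposition at t = 0
print(np.sum(np.abs(c) ** 2))                  # -> 1.0
```

The experimental difficulty, of course, lies in realizing such a set of coefficients with the fields and pulses actually available, which is what the protocols below address for Rydberg states.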

Fig. 1: Poincaré surface of section for the atom periodically kicked by a train of kicks with *v* = 1.095 and Δ*p* = −0.1. A periodic orbit (blue dashed line) is located at the center (green cross) of the main stable island (red) in the Poincaré surface. The upper frame explains graphically how the periodic orbit can be stabilized.

A Rydberg wave packet is a coherent superposition of highly excited atomic states, localized in phase space. Due to the relatively large time and spatial scales (*t* ~ *n*^{3} and *r* ~ *n*^{2}) of Rydberg atoms with principal quantum number *n*, Rydberg wave packets are among the best explored quantum objects that approximately follow the dynamics of the corresponding classical particle, and they serve as a benchmark for probing the crossover between classical and quantum dynamics. With recent advances in ultra-short pulse generation it has become possible to engineer wave packets using Rydberg atoms. Using such a Rydberg wave packet as the initial state, we have demonstrated a few protocols to steer it towards any preferred location in phase space, or to manipulate its size, using a train of short pulses, so-called half-cycle pulses (HCPs).
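To give a feeling for the *t* ~ *n*^{3} scaling: for a hydrogenic Rydberg state the classical Kepler orbital period is *T*_{*n*} = 2π*n*^{3} in atomic units, which lands in the picosecond range for typical Rydberg quantum numbers (the numbers below are a back-of-the-envelope sketch, not taken from the experiments discussed here):

```python
import math

ATOMIC_TIME = 2.4188843e-17  # atomic unit of time, in seconds

def kepler_period(n):
    """Classical Kepler orbital period of a hydrogenic Rydberg electron:
    T_n = 2*pi*n^3 in atomic units, converted to seconds."""
    return 2 * math.pi * n ** 3 * ATOMIC_TIME

# The orbital period grows by three orders of magnitude per decade in n:
for n in (10, 30, 100):
    print(n, kepler_period(n))   # n = 30 gives roughly 4 picoseconds
```

This is why trains of sub-picosecond half-cycle pulses can resolve, and be synchronized with, the orbital motion.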

The response of Rydberg atoms to a train of identical HCPs equispaced in time has been studied extensively, revealing a wide variety of dynamical behaviors. Under the influence of a periodic train of kicks, the electron generally experiences a random sequence of energy transfers Δ*E*, leading to random-walk behavior in energy space. On the other hand, by tuning the frequency of the train near the Kepler orbital frequency and setting the kick strength to Δ*p* = −2*p*, so that Δ*E* = *p*Δ*p* + Δ*p*^{2}/2 = 0, the motion of the electron can be synchronized with the periodic train and stabilized without any net energy transfer. This motion is analogous to a tennis ball (the electron) bouncing off a wall (the nucleus acting as a scatterer). At each hit (kick) by the racket the ball changes only its direction of motion, *i.e.*, *p*_{after} = *p*_{before} + Δ*p* = −*p*_{before} when Δ*p* = −2*p*_{before}. By hitting the ball at the proper frequency, a periodic motion can be established. This idea of dynamical stabilization has been used to create a wave packet localized in phase space. The main stable island (red) seen in the Poincaré surface of section is a manifestation of this periodic motion and of the quasi-periodic trajectories surrounding it. Classical trajectories inside the island remain trapped as long as the train of kicks is applied, while trajectories outside spread over the whole phase space; this unbounded motion (blue chaotic sea) eventually leads to ionization. Another consequence of the island structure is that the parts of the quantum wave function outside the island are trimmed off by the periodic pulses, and consequently the wave packet becomes well localized inside the island.
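The stabilization condition can be verified in two lines: an impulsive kick Δ*p* changes the kinetic energy by Δ*E* = *p*Δ*p* + Δ*p*^{2}/2, and the choice Δ*p* = −2*p* makes this vanish while reversing the momentum, exactly like the tennis ball:

```python
def kick(p, dp):
    """Impulsive momentum kick: p -> p + dp.
    The kinetic energy transfer is dE = p*dp + dp**2 / 2."""
    dE = p * dp + 0.5 * dp ** 2
    return p + dp, dE

p = 0.7
p_after, dE = kick(p, -2 * p)   # the stabilizing choice dp = -2p
print(p_after, dE)              # -> -0.7 0.0  (pure momentum reversal)
```

Any other kick strength transfers energy, which is what accumulates into the random walk in energy space described above.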

Since the first working laser device was built by Maiman in 1960, the progress in laser technology has been tremendous. The intensity of lasers has been increased by many orders of magnitude. Intensities presently reach well above 10^{20} W/cm^{2}, where plasma effects as well as relativistic effects become important. In the near future, laser intensities may even reach the critical field strength needed to directly produce electron-positron pairs. At the same time, the length of the shortest pulses has decreased by more than 10 orders of magnitude. While the first lasers had pulse lengths of some 100 ms, very short pulses can nowadays be produced through mode-locking. In 1990, Zewail et al. managed to generate pulses as short as several femtoseconds, which meant that snapshots of chemical reactions could be taken directly. This opened up the field of femtochemistry.

Fig.: Decrease of pulse duration as a function of time.

The possibility of driving an atom by a femtosecond laser, as well as the use of high-harmonic generation to produce the shortest pulses presently available, challenges our current understanding of the processes taking place in an atom driven by an ultrashort electric field. Two different regimes can be distinguished: the multiphoton regime (high frequency and low intensity) and the tunneling regime (low frequency and high intensity). In the multiphoton regime many experimental (see for example [2]) and theoretical studies have been performed, which have led to a fairly complete understanding of the physical processes involved. In the tunneling regime, on the other hand, recent experiments with linearly polarized lasers have shown novel and previously unexplained structures in the momentum distribution of the photoionized electrons in rare gases. The so-called "double-hump" structure in the longitudinal momentum distribution has been identified as a rescattering process for double ionization and as the interaction between the electron and the core for single ionization.
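The boundary between the two regimes is conventionally quantified by the Keldysh parameter γ = ω√(2*I*_{p})/*F* (in atomic units, with laser frequency ω, peak field strength *F*, and ionization potential *I*_{p}): γ ≫ 1 indicates the multiphoton regime, γ ≪ 1 the tunneling regime. A minimal sketch with illustrative (not experimental) field strengths:

```python
import math

def keldysh_gamma(omega, F, Ip):
    """Keldysh parameter gamma = omega * sqrt(2*Ip) / F (atomic units).
    gamma >> 1: multiphoton regime; gamma << 1: tunneling regime."""
    return omega * math.sqrt(2.0 * Ip) / F

Ip = 0.5  # hydrogen ground-state ionization potential in atomic units
# 800 nm Ti:sapphire light corresponds to omega ~ 0.057 a.u.;
# the two field strengths below are arbitrary illustrative values.
print(keldysh_gamma(0.057, 0.005, Ip))  # weak field: gamma > 1, multiphoton
print(keldysh_gamma(0.057, 0.1, Ip))    # strong field: gamma < 1, tunneling
```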

We study the hydrogen atom driven by a linearly polarized laser field both classically and quantum mechanically. For the first approach we employ the classical trajectory Monte Carlo (CTMC) method including tunneling effects (CTMC-T): the electron is allowed to tunnel through the potential barrier whenever it reaches the outer turning point. Alternatively, the time-dependent Schrödinger equation is solved numerically by means of the generalized pseudo-spectral method. The process of detecting an electron of momentum **k** can then be viewed as a projection of the wave function onto Coulomb wave functions.

A major aim in ballistic transport theory is to simulate and stimulate experiments in the field of phase-coherent electron conductance through nano-scaled semiconductor devices. However, even for two-dimensional quantum dots ("quantum billiards") the numerical solution of the Schrödinger equation has remained a computational challenge. This is partly due to the fact that many of the most interesting phenomena occur in a parameter regime of either high magnetic field *B* or small de Broglie wavelength *λ*_{D}.

However interesting they may be, these parameter ranges are difficult to handle from a computational point of view. This is because in the "semi-classical regime" of small *λ*_{D}, as well as in the "quantum Hall regime" of high magnetic fields *B*, the proper description of the transport process requires a large number of basis functions. As a result, the theoretical models presently employed eventually become computationally unfeasible or numerically unstable.

Fig.: (a) Illustration of the conventional tight-binding discretization employed in the Recursive Green's Function Method for transport through a circular quantum dot with infinite leads. Our modular approach as illustrated in (b) leads to increased efficiency in the numerical calculations.

At the Institute for Theoretical Physics an extension of the widely used Recursive Green's Function Method (RGM) was developed which bypasses several of the limitations of conventional techniques. The key ingredient of this approach is the decomposition of the scattering geometry into separable substructures ("modules") for which all the numerical procedures can be performed very efficiently. The modules are eventually connected with each other by means of matrix Dyson equations such that they span the entire scattering region. In this way we reach a high degree of computational efficiency.
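The essence of connecting modules via a Dyson equation can be sketched in a few lines of linear algebra. This toy example (not the actual RGM implementation; the matrices are arbitrary symmetric "modules") shows that assembling the Green's function of two decoupled modules through *G* = (*g*^{−1} − *V*)^{−1} reproduces the Green's function of the directly coupled system:

```python
import numpy as np

# Two decoupled "modules" H1, H2 joined by a coupling V; the Green's
# function of the whole is recovered from the module Green's functions g
# via the Dyson equation  G = (g^{-1} - V)^{-1}.
rng = np.random.default_rng(0)
H1 = rng.standard_normal((3, 3)); H1 = (H1 + H1.T) / 2   # module 1
H2 = rng.standard_normal((3, 3)); H2 = (H2 + H2.T) / 2   # module 2

H = np.block([[H1, np.zeros((3, 3))], [np.zeros((3, 3)), H2]])
V = np.zeros((6, 6)); V[2, 3] = V[3, 2] = -1.0           # join the modules

E = 0.5 + 0.1j                                            # energy, kept off the real axis
g = np.linalg.inv(E * np.eye(6) - H)                      # decoupled modules
G_dyson = np.linalg.inv(np.linalg.inv(g) - V)             # Dyson assembly
G_direct = np.linalg.inv(E * np.eye(6) - H - V)           # full system
print(np.allclose(G_dyson, G_direct))                     # -> True
```

The computational gain in the actual method comes from the fact that the module Green's functions *g* can be obtained very cheaply for separable geometries, so only the (small) coupling step requires a full matrix equation.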