A DC electrical signal is constant and has no oscillations; a plane wave propagating parallel to the optic axis has constant value in any x–y plane, and is therefore analogous to the constant DC component of an electrical signal. Bandwidth in electrical signals relates to the difference between the highest and lowest frequencies present in the spectrum of the signal. For optical systems, bandwidth also relates to spatial frequency content (spatial bandwidth), but it has a secondary meaning as well.
It also measures how far from the optic axis the corresponding plane waves are tilted, and so this type of bandwidth is often also referred to as angular bandwidth. It takes more frequency bandwidth to produce a short pulse in an electrical circuit, and more angular (or spatial frequency) bandwidth to produce a sharp spot in an optical system (see the discussion related to the point spread function).
The plane wave spectrum arises naturally as the eigenfunction or "natural mode" solution to the homogeneous electromagnetic wave equation in rectangular coordinates (see also Electromagnetic radiation, which derives the wave equation from Maxwell's equations in source-free media, or Scott). In the frequency domain, with an assumed engineering time convention of e^(jωt), the homogeneous electromagnetic wave equation is known as the Helmholtz equation and takes the form (∇² + k²)ψ = 0, where k = ω/c is the wavenumber. In the case of differential equations, as in the case of matrix equations, whenever the right-hand side of an equation is zero (i.e., the forcing function is zero), the equation may still admit a nontrivial solution, known as an eigenfunction or natural-mode solution.
Common physical examples of resonant natural modes would include the resonant vibrational modes of stringed instruments (1D), percussion instruments (2D) or the former Tacoma Narrows Bridge (3D).
Ottica di Fourier | SpringerLink
Examples of propagating natural modes would include waveguide modes, optical fiber modes, solitons and Bloch waves. Infinite homogeneous media admit the rectangular, circular and spherical harmonic solutions to the Helmholtz equation, depending on the coordinate system under consideration.
The propagating plane waves we'll study in this article are perhaps the simplest type of propagating waves found in any type of media. There is a striking similarity between the Helmholtz equation above and the matrix eigenvalue equation of linear algebra. The interested reader may investigate other functional linear operators which give rise to different kinds of orthogonal eigenfunctions, such as Legendre polynomials, Chebyshev polynomials and Hermite polynomials.
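As a quick sanity check on the eigenfunction claim, the sketch below (a minimal numerical illustration with an arbitrarily chosen wavenumber, not taken from the text) verifies that a 1D plane wave exp(ikx) satisfies the Helmholtz equation d²u/dx² + k²u = 0 up to discretization error:

```python
import numpy as np

# Numerical check that a plane wave exp(i k x) is an eigenfunction of the
# 1D Helmholtz operator: d^2u/dx^2 + k^2 u = 0.  k and dx are assumed,
# illustrative values.
k = 5.0
dx = 1e-3
x = np.linspace(0, 1, int(1 / dx) + 1)
u = np.exp(1j * k * x)

# Central second difference approximates d^2u/dx^2 on the interior points.
d2u = (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
residual = d2u + k**2 * u[1:-1]
print(np.abs(residual).max())  # ~0, limited by O(dx^2) discretization error
```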
In the matrix case, eigenvalues may be found by setting the determinant of the matrix equal to zero, i.e., by finding the roots of the characteristic polynomial. In certain physics applications (such as in the computation of bands in a periodic volume), it is often the case that the elements of a matrix will be very complicated functions of frequency and wavenumber, and the matrix will be non-singular for most combinations of frequency and wavenumber, but will also be singular for certain specific combinations.
By finding which combinations of frequency and wavenumber drive the determinant of the matrix to zero, the propagation characteristics of the medium may be determined. Relations of this type, between frequency and wavenumber, are known as dispersion relations and some physical systems may admit many different kinds of dispersion relations. An example from electromagnetics is the ordinary waveguide, which may admit numerous dispersion relations, each associated with a unique mode of the waveguide.
Each propagation mode of the waveguide is known as an eigenfunction solution (or eigenmode solution) to Maxwell's equations in the waveguide. Free space also admits eigenmode (natural mode) solutions, known more commonly as plane waves, but with the distinction that for any given frequency, free space admits a continuous modal spectrum, whereas waveguides have a discrete mode spectrum. In this case the dispersion relation is linear, as in section 1.
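The contrast between a discrete and a continuous modal spectrum can be illustrated with a parallel-plate waveguide, whose dispersion relation β_m = √(k² − (mπ/a)²) admits only finitely many propagating modes at a given frequency. The plate separation and drive frequency below are illustrative assumptions, not values from the text:

```python
import numpy as np

c = 3e8          # speed of light, m/s
a = 0.02         # assumed plate separation: 2 cm
f = 25e9         # assumed drive frequency: 25 GHz
k = 2 * np.pi * f / c   # free-space wavenumber

# Parallel-plate guide: mode m propagates only if k > m*pi/a.
# Each mode has its own dispersion relation beta_m = sqrt(k^2 - (m*pi/a)^2).
modes = []
m = 1
while m * np.pi / a < k:
    beta = np.sqrt(k**2 - (m * np.pi / a)**2)
    modes.append((m, beta))
    m += 1

print(f"{len(modes)} propagating modes at {f/1e9:.0f} GHz")
```

Above the cutoff of each mode the propagation constant is real; free space, by contrast, supports a plane wave for every direction, i.e. a continuum of modes.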
The notion of k-space is central to many disciplines in engineering and physics, especially in the study of periodic volumes, such as in crystallography and the band theory of semiconductor materials. Note: the normalizing factor of 1/(2π) (or 1/√(2π) in the symmetric convention) is present whenever angular frequency (radians) is used, but not when ordinary frequency (cycles) is used.
An optical system consists of an input plane, an output plane, and a set of components that transforms the image f formed at the input into a different image g formed at the output. The output image is related to the input image by convolving the input image with the optical impulse response h (known as the point-spread function for focused optical systems).
The impulse response uniquely defines the input-output behavior of the optical system. By convention, the optical axis of the system is taken as the z-axis. As a result, the two images and the impulse response are all functions of the transverse coordinates, x and y. The impulse response of an optical imaging system is the output plane field which is produced when an ideal mathematical point source of light is placed in the input plane (usually on-axis).
In practice, it is not necessary to have an ideal point source in order to determine an exact impulse response. This is because any source bandwidth which lies outside the bandwidth of the system won't matter anyway (since it cannot even be captured by the optical system), so it is not needed in determining the impulse response. The source only needs to have at least as much angular bandwidth as the optical system. Optical systems typically fall into one of two different categories. The first is the ordinary focused optical imaging system, wherein the input plane is called the object plane and the output plane is called the image plane.
The field in the image plane is desired to be a high-quality reproduction of the field in the object plane. In this case, the impulse response of the optical system is desired to approximate a 2D delta function, at the same location (or a linearly scaled location) in the output plane corresponding to the location of the impulse in the input plane. The actual impulse response typically resembles an Airy function, whose radius is on the order of the wavelength of the light used. In this case, the impulse response is typically referred to as a point spread function, since the mathematical point of light in the object plane has been spread out into an Airy function in the image plane.
The second type is the optical image processing system, in which a significant feature in the input plane field is to be located and isolated. In this case, the impulse response of the system is desired to be a close replica (picture) of that feature which is being searched for in the input plane field, so that a convolution of the impulse response (an image of the desired feature) against the input plane field will produce a bright spot at the feature location in the output plane.
It is this latter type of optical image processing system that is the subject of this section.
The input image f is decomposed into a weighted superposition of impulses, and the output image g is given by the convolution g(x, y) = ∬ h(x − x′, y − y′) f(x′, y′) dx′ dy′. The alert reader will note that the integral above tacitly assumes that the impulse response is NOT a function of the position (x′, y′) of the impulse of light in the input plane (if this were not the case, this type of convolution would not be possible). This property is known as shift invariance (Scott).
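A minimal numerical sketch of this superposition-of-impulse-responses picture, using a hypothetical two-point object and an assumed Gaussian stand-in for the true Airy-like PSF:

```python
import numpy as np

N = 64
obj = np.zeros((N, N))
obj[20, 20] = 1.0      # hypothetical point sources in the object plane
obj[40, 44] = 0.5

# Assumed Gaussian PSF as a stand-in for the Airy pattern of a real lens.
yy, xx = np.mgrid[0:N, 0:N]
def psf(y0, x0):
    return np.exp(-((xx - x0)**2 + (yy - y0)**2) / (2 * 2.0**2))

# Superposition: each input impulse contributes a shifted, weighted copy
# of the impulse response -- exactly the convolution integral above.
img = np.zeros((N, N))
for (y0, x0) in zip(*np.nonzero(obj)):
    img += obj[y0, x0] * psf(y0, x0)
```

Each point of light in the object plane has been spread into a blur in the image plane, with the brightest blur at the stronger source.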
No optical system is perfectly shift invariant: as the ideal, mathematical point of light is scanned away from the optic axis, aberrations will eventually degrade the impulse response (known as coma in focused imaging systems). However, high quality optical systems are often "shift invariant enough" over certain regions of the input plane that we may regard the impulse response as being a function of only the difference between input and output plane coordinates, and thereby use the equation above with impunity. Also, this equation assumes unit magnification. If magnification is present, the equation must be modified to include a coordinate scaling between the input and output planes.
The extension to two dimensions is trivial, except for the difference that causality exists in the time domain, but not in the spatial domain. Obtaining the convolution representation of the system response requires representing the input signal as a weighted superposition over a train of impulse functions by using the sifting property of Dirac delta functions.
It is then presumed that the system under consideration is linear, that is to say that the output of the system due to two different inputs (possibly at two different times) is the sum of the individual outputs of the system to the two inputs, when introduced individually. Thus the optical system may contain no nonlinear materials nor active devices (except possibly extremely linear active devices).
The output of the system, for a single delta function input, is defined as the impulse response of the system, h(t − t′). And, by our linearity assumption (i.e., that the response to a superposition of inputs is the superposition of the individual responses), the response to an arbitrary input is the superposition of shifted, weighted impulse responses; this is where the convolution equation above comes from. The convolution equation is useful because it is often much easier to find the response of a system to a delta function input, and then perform the convolution above to find the response to an arbitrary input, than it is to try to find the response to the arbitrary input directly.
Also, the impulse response (in either the time or frequency domain) usually yields insight into relevant figures of merit of the system. In the case of most lenses, the point spread function (PSF) is a fairly common figure of merit for evaluation purposes. The same logic is used in connection with the Huygens–Fresnel principle, or the Stratton–Chu formulation, wherein the "impulse response" is referred to as the Green's function of the system. So the spatial domain operation of a linear optical system is analogous in this way to the Huygens–Fresnel principle.
The system transfer function H is the Fourier transform of the impulse response h. In optical imaging this function is better known as the optical transfer function (Goodman). Once again it may be noted from the discussion on the Abbe sine condition that this equation assumes unit magnification. This equation takes on its real meaning when the Fourier transform is associated with the coefficient of the plane wave whose transverse wavenumbers are (k_x, k_y).
Thus, the input-plane plane wave spectrum is transformed into the output-plane plane wave spectrum through the multiplicative action of the system transfer function. It is at this stage of understanding that the previous background on the plane wave spectrum becomes invaluable to the conceptualization of Fourier optical systems. Fourier optics is used in the field of optical information processing, the staple of which is the classical 4F processor. The Fourier transform properties of a lens provide numerous applications in optical signal processing such as spatial filtering , optical correlation and computer generated holograms.
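The multiplicative action of the transfer function on the plane wave spectrum can be sketched numerically. Here H is an assumed ideal low-pass pupil, not the transfer function of any particular system, and the input is an arbitrary test pattern:

```python
import numpy as np

N = 128
x = np.arange(N)
# Hypothetical input image: a coarse checkerboard test pattern.
f_in = ((x[:, None] % 16 < 8) & (x[None, :] % 16 < 8)).astype(float)

# Input-plane plane wave spectrum.
F = np.fft.fft2(f_in)

# Assumed transfer function: an ideal circular pupil passing only
# transverse spatial frequencies below a cutoff of 0.1 cycles/sample.
k = np.fft.fftfreq(N)
KX, KY = np.meshgrid(k, k)
H = (np.sqrt(KX**2 + KY**2) < 0.1).astype(float)

# Output spectrum = transfer function x input spectrum; then invert.
G = H * F
g = np.real(np.fft.ifft2(G))
```

The output image g is a low-pass-filtered (blurred) version of the input, exactly because its plane wave spectrum has been truncated by H.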
Fourier optical theory is used in interferometry, optical tweezers, atom traps, and quantum computing. Concepts of Fourier optics are used to reconstruct the phase of light intensity in the spatial frequency plane (see adaptive-additive algorithm).
If a transmissive object is placed one focal length in front of a lens, then its Fourier transform will be formed one focal length behind the lens. Consider the figure to the right. In this figure, a plane wave incident from the left is assumed. The transmittance function in the front focal plane (i.e., one focal length in front of the lens) spatially modulates the incident plane wave in magnitude and phase, and in so doing produces a spectrum of plane waves corresponding to the Fourier transform of the transmittance function.
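The focal-plane coordinate of each spectral component scales as x = λf·ν, where ν is the transverse spatial frequency. A small worked example with assumed values (HeNe wavelength, 100 mm focal length, 100 cycles/mm cosine grating):

```python
import numpy as np

lam = 633e-9     # assumed wavelength (HeNe laser, 633 nm)
f_lens = 0.1     # assumed focal length, 100 mm
nu = 1e5         # assumed grating frequency, 100 cycles/mm

# A transmissive cosine grating t(x) = (1 + cos(2*pi*nu*x))/2 placed one
# focal length in front of the lens produces three focal-plane spots:
# a DC spot on axis and two first orders at x = +/- lam * f_lens * nu.
x_order = lam * f_lens * nu
print(f"first-order spots at ±{x_order*1e3:.2f} mm")  # ±6.33 mm
```

This linear frequency-to-position mapping is what makes the back focal plane a physical Fourier transform plane.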
The various plane wave components propagate at different tilt angles with respect to the optic axis of the lens (i.e., each transverse spatial frequency corresponds to a different tilt angle).

This scheme was realized with the speckle suppressor located at the focus position of the same stack of 71 Be O30H lenses. Correct simulation of the intensity of X-rays passing through porous nanoberyllium is complicated by the difficulty of specifying the appropriate parameters.
An example of a software program which allows the insertion of defects in the lens is SRW (Chubar et al.). Porous nanoberyllium has a density of 0. To simulate a plate with 0. Such a task would be impossible even with the help of supercomputers. Because of the amount of calculation involved, and in order to enable the simulation of a porous nanoberyllium plate, its model must be simplified while retaining all the required properties. The most efficient way of solving this problem is to create an appropriate statistical model.
Such a sample can be simulated by dividing the portion of the considered material into smaller pieces and making a random choice, so that there are both empty spaces and matter. Obviously, the size of these parts should be properly linked with the size of the pore. For porous nanoberyllium, whose cavities have a diameter of 0.
This is reasonable since, statistically, if we considered smaller-sized elements, those located close to each other would have a greater impact on the changes in the X-ray intensity than the size of an element alone would indicate. Another important issue that must be taken into account in the calculations is the selection of an appropriate model for calculating the X-ray intensity inside the porous beryllium material, and in the air after the X-rays pass through the material. For this reason, the simulation scheme was based on the finite difference method, which does not require continuity or differentiability of the data.
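The statistical model described above can be sketched as a grid of independent binary cells, matter or void; the fill fraction and grid dimensions below are illustrative assumptions rather than the paper's actual parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

fill = 0.4             # assumed beryllium volume fraction
shape = (50, 64, 64)   # assumed: 50 steps along x, 64x64 transverse grid

# Each cell is independently "matter" with probability equal to the
# target volume fraction; the cell size should be linked to the typical
# pore size, as discussed in the text.
voxels = rng.random(shape) < fill

print(voxels.mean())  # close to the target fill fraction
```

The selection for each step of the numerical procedure along x is drawn independently, mirroring the independent random choices the text describes.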
Propagation of the X-ray waves through optical objects is described by the equation of monochromatic electromagnetic wave propagation (eq. 1). The developed method is built by discretizing this equation with respect to the spatial variables perpendicular to the optical axis of the optical system. The resulting system of approximate ordinary differential equations is solved with the implicit Runge–Kutta method of second order, combined with an iterative procedure.
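A stripped-down sketch of one such implicit step: a Crank–Nicolson (second-order implicit) update for free-space paraxial propagation only, with an assumed grid and wavelength. The actual method also includes the material term and the iterative procedure mentioned above:

```python
import numpy as np

# One Crank-Nicolson step for the 1D paraxial equation
#   du/dz = (i / 2k) d^2u/dx^2
# with assumed, illustrative parameters (0.1 nm X-rays).
N, dx, dz = 256, 0.5e-6, 1e-4
k = 2 * np.pi / 1e-10

# Discrete transverse Laplacian (Dirichlet boundaries).
L = (np.diag(-2 * np.ones(N)) + np.diag(np.ones(N - 1), 1)
     + np.diag(np.ones(N - 1), -1)) / dx**2
A = 1j * dz / (4 * k) * L       # half-step operator

x = (np.arange(N) - N / 2) * dx
u = np.exp(-x**2 / (2 * (10 * dx)**2)).astype(complex)   # Gaussian beam

# (I - A) u_next = (I + A) u : unconditionally stable and, for this
# lossless case, norm-preserving.
u_next = np.linalg.solve(np.eye(N) - A, (np.eye(N) + A) @ u)
```

Because A is anti-Hermitian here, the update is a Cayley transform and conserves the beam power to solver precision, a useful sanity check on the discretization.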
The simulation was carried out for a plate of size 0. To obtain a model of the beryllium plate without pores, we assume that all the cubes are composed entirely of beryllium. To obtain the distribution of beryllium in the plate for a one-step numerical procedure along the x axis, such a selection should be repeated independently times.
If the average density distribution of beryllium in the plate is known, it is possible to determine the values of the function B for each spatial step of the numerical procedure. In order to obtain results for the entire plate, 50 steps of the numerical procedure need to be performed. Of course, the selection of elements for each of these steps must be performed independently. Such a model of porous nanoberyllium gives X-ray scattering of the order of 0. Inclusion of smaller components increases the average angular scattering, and larger ones cause a reduction of scattering. Due to large scattering it is not possible to carry out calculations at large distances from the beryllium plate, because this involves increasing the area of computation that must be taken into account, and inevitably lengthens the computation time.
Along with the increase of the distance from the beryllium plate, the number of steps in the finite difference method also grows. This, in turn, reduces the accuracy of the results, and is the reason why this method is only suitable for achieving statistical results.
The possibility of accurate X-ray imaging for porous plates of nanoberyllium is a serious mathematical problem in need of further investigation. A speckle suppressor device based on highly porous nanoberyllium was applied for manipulating the spatial coherence length, both by increasing the effective source size and by transforming the contrast mechanism during the imaging experiments. From the experiments performed, the optimal position of the speckle suppressor was determined: for X-ray projection microscopy the device has to be placed exactly at the secondary source position; for full-field microscopy with lenses it has to be located at the imaginary focus of the lens; and for radiography schemes the speckle suppressor has to be positioned just in front of the object.
Higher-energy applications of the speckle suppressor require an increase in the nanoberyllium material density and a reduction of the average pore size. Special thanks are due to C. Detlefs and P. Wattecamps from ID06 for their support during the experiments. We also wish to express gratitude to the Xscitech trading company for manufacturing and providing us with the first version of the speckle suppressor device.
National Center for Biotechnology Information, U.S. National Library of Medicine. J Synchrotron Radiat. Published online Apr 9.

The effect of the perturbation is to increase or decrease the intensity at different parts of the OFT, as shown in Fig. The difference between these two images encodes the phase. The perturbation is indicated by the visible beam. The optical system is performing a true continuous Fourier transform (albeit of a sampled input), while we are interested in performing the discrete Fourier transform.
Thus we need to sample the OFT appropriately. An ideal sensor array would implement Dirac comb sampling with zero-size pixels. However, this is clearly not practical. We tend towards this case by over-sampling the OFT with the camera sensor and selecting a sparse subset of the sensor pixels to represent the DFT values. These are shown as green crosses, overlaid with the difference due to the perturbation, in Fig.
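The relationship between the over-sampled spectrum and the DFT can be illustrated digitally: zero-padding emulates over-sampling of the continuous transform, and selecting a sparse subset of samples recovers the DFT exactly. This is a minimal sketch of the sampling idea, not of the optical system itself:

```python
import numpy as np

rng = np.random.default_rng(1)
N, over = 64, 4                 # signal length, oversampling factor
x = rng.standard_normal(N)

# Zero-padding emulates the over-sampled spectrum seen by the camera:
# the optical system computes a (quasi-)continuous Fourier transform.
oversampled = np.fft.fft(x, n=over * N)

# Selecting a sparse subset (every `over`-th sample) recovers the DFT.
dft = oversampled[::over]

print(np.allclose(dft, np.fft.fft(x)))  # True
```

This is why picking the right sparse subset of an over-sampling camera sensor can represent the DFT values without loss.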
In a competitive implementation of this technology, a custom sparse sensor array should be implemented. Hence, our output will be modulated by a sinc envelope due to the finite pixel shape. This envelope is measured experimentally, as the effective pixel shape, which leads to apodization of the Fourier transform in the two-pass system, depends on alignment. The appropriate correction function is shown in Fig. There is good agreement between the optical measurement and the computed values. Note that these results are unprocessed and do not include any error-reduction techniques.
A classic method to numerically simulate such a system is to exploit the fact that in the Fourier domain the heat equation ∂u/∂t = α∇²u becomes the ordinary differential equation ∂û/∂t = −α|k|²û for each spatial frequency k.
A simple implicit finite difference method can be used to model the system evolution as û^(n+1) = û^(n) / (1 + α|k|²Δt). By evaluating in Fourier space, instead of calculating a computationally costly differential each timestep, we evaluate a cheap multiplication instead. However, to visualise the progress of this simulation we need to use the FFT, which would become computationally costly at high resolutions. Moreover, if we were considering a non-linear differential equation we would be obliged to use an FFT each timestep in order to evaluate the non-linear terms.
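A minimal sketch of this Fourier-domain time-stepping, with assumed diffusivity, timestep, and initial condition:

```python
import numpy as np

# Spectral solution of the 1D heat equation u_t = alpha * u_xx on a
# periodic unit domain: in Fourier space each mode evolves independently,
# and the implicit (backward Euler) update is a cheap division.
N, alpha, dt, steps = 128, 1e-3, 1e-3, 200
x = np.linspace(0, 1, N, endpoint=False)
k = 2 * np.pi * np.fft.fftfreq(N, d=1.0 / N)

u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)   # hot strip initial condition
u_hat = np.fft.fft(u)
for _ in range(steps):
    u_hat = u_hat / (1 + alpha * dt * k**2)     # implicit step, no FFT needed

u_final = np.real(np.fft.ifft(u_hat))           # FFT only to visualise
```

Note that the inverse FFT is needed only when we want to look at the result, which is exactly the step the optical Fourier transform could take over.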
Hence, it is compelling to consider using the optical Fourier transform to visualise the progress of such a simulation. This requires performing a set of complex-to-real (as temperature is a real quantity) Fourier transforms. We do this using the full method shown in Fig. A low-resolution demonstration of using the OFT to visualise the progress of one such simulation is shown in Fig. The initial conditions for the simulation are shown in Fig. An example of the stages of a complete complex-to-complex OFT is shown schematically in Fig.
The function is decomposed into its even and odd components, which are displayed on the SLM and the power spectrum obtained. The phase of these optical Fourier transforms is constrained by the symmetry of the functions. A perturbation is applied to determine the phase. The discrete Fourier transform, which happens to be real, is then reconstituted from these components. Subsequent results at different timesteps are shown in Fig.
It is clear that there is in general a good recovery of the overall result and the phase determination method is working well. The error observed is the result of noise during acquisition and imperfect optical modulation. Using the OFT to visualise the progress of a Fourier domain heat equation simulation.
Note all equivalent quantities are evaluated with the same normalised units. The complex input function phase not shown is decomposed into real and imaginary, then even and odd components. These components all have binary phase. These components can then be added together to form first the Fourier transform of the real and imaginary components, and then the Fourier transform of the entire function which happens to be real.
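The decomposition and reconstitution can be checked digitally for a real-valued input: the even component has a purely real transform, the odd component a purely imaginary one, and their sum reconstitutes the full DFT:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 64
f = rng.standard_normal(N)          # a real-valued input function

# Decompose into even and odd components about the origin (index 0),
# using f(-x) on the periodic grid.
f_rev = np.roll(f[::-1], 1)
even, odd = (f + f_rev) / 2, (f - f_rev) / 2

# Symmetry constrains the phase: the FT of the even part is purely real,
# that of the odd part purely imaginary -- so each is recoverable from a
# power-spectrum measurement up to sign.
E, O = np.fft.fft(even), np.fft.fft(odd)
print(np.abs(E.imag).max(), np.abs(O.real).max())   # both ~ 0

# Reconstitute the full transform from the components.
F = E + O
print(np.allclose(F, np.fft.fft(f)))  # True
```

This is the digital analogue of displaying the components on the SLM, measuring their power spectra, and adding the results back together.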
The Fourier representation inputs have been DC balanced, and the central order (DC term) of the direct output representation must be ignored due to system imperfections. Digital electronic computers are a prodigiously successful technology, and the fast Fourier transform is a ubiquitous and highly optimised algorithm. Nonetheless, there is potential utility in an optical Fourier transform coprocessor. Performing extremely high resolution 2D Fourier transforms can often represent a computational bottleneck.
As well as offering improved performance, coherent optical processing offers the potential for significant improvements in power efficiency as only the input and output components consume power. However, in order for such a system to prove useful, a sufficiently large number of pixels must be marshalled. Moreover, an analogue optical processor of this nature cannot rival the precision and accuracy offered by a digital electronic computer.
The maximal accuracy which could be obtained in a single calculation is likely significantly less than 1 byte (8 bits). Higher precision transforms would require computing lower precision transforms and combining them, exploiting the linearity of the Fourier transform. However, this would require high confidence in the given accuracy, in particular of the transforms representing the more significant bits. In order to compete, the advantage offered in terms of resolution and execution time must be overwhelming compared to the digital alternative. Meanwhile, the difficult-to-parallelise, highly connected problem of the Fourier transform (every point in the output depending on every point in the input) is implemented optically.
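Combining lower-precision transforms through linearity can be sketched as follows, splitting hypothetical 16-bit samples into high and low bytes (a digital illustration of the idea, not a model of the optical hardware):

```python
import numpy as np

rng = np.random.default_rng(3)
x = rng.integers(0, 2**16, 64)      # hypothetical 16-bit input samples

# Split into high and low bytes and transform each at 8-bit precision;
# linearity lets us combine: FFT(256*hi + lo) = 256*FFT(hi) + FFT(lo).
hi, lo = x >> 8, x & 0xFF
combined = 256 * np.fft.fft(hi) + np.fft.fft(lo)

print(np.allclose(combined, np.fft.fft(x)))  # True
```

Errors in the high-byte transform are amplified 256-fold in the combination, which is why the text stresses confidence in the transforms representing the more significant bits.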
Consequently many of the amplitude levels are towards the bottom of the intensity scale. A non-linear intensity response in an appropriately-corrected camera—essentially a specific hardware gamma correction—could reduce this. Furthermore, achieving accurate high-resolution Fourier transforms places high demands on the optical design required to achieve a well-corrected distortion- and aberration-free Fourier transform. As with all analogue processing systems one must contend with the fact that precision is limited by inherent physical noise.
While higher quality components can be used to reduce its effect, in a coherent system such as this, optical speckle is a significant and inevitable source of noise. This can be reduced through diverse measurements, for example by taking the OFT of the same function rotated, translated, and scaled on the SLM and averaging the results. Precision can be increased by combining linear FFTs across different levels. Furthermore, for a given precision input, higher precision computation happens naturally in the analogue optical domain.
The requirements on optical design in order to produce a well-corrected optical relay and a distortion-free optical Fourier transform are within the realms of conventional optical design. However, as higher performance, high resolution, liquid-crystal on silicon (LCOS) devices are used, these tend to come with smaller pixels and the optical design issues become more challenging [22, 23]. In particular, modern LCOS devices often have little inter-pixel deadspace, meaning that there is significant cross-talk in an optical relay in a two-SLM system.
Both this requirement of well-isolated pixels, and that of small-size camera pixels would potentially require custom hardware development. Assuming that a digital electronic system can be built to drive the electro-optics to their full capacity, the factors limiting system performance are the hardware resolution and system refresh time. Greater than megapixel camera sensors are now available, and perhaps making use of either physically or virtually tiled SLMs input resolutions of this order could be achieved with commodity SLMs. Thus, it can be seen that competitive performance is already achievable without significant input and output transducer development, although achieving sufficient system bandwidth is a challenge.
This could be facilitated by tiling camera and SLM panels, notwithstanding some significant engineering challenges. It should be noted that this bandwidth limitation is not unique to an optical coprocessor, but a general issue with coprocessors. While this potential performance is compelling, it must be acknowledged that existing computational technologies are prodigious and rapidly advancing. It is challenging to be competitive within this context. However, this method of performing an optical Fourier transform does benefit from an attractive scaling regime compared to other potential implementations, arising from the fact that the Fourier transform itself is implemented naturally in free space.
Hence, it could very well prove attractive at extremely high resolutions. However, developing a very high resolution system comes with significant engineering hurdles. Not only is providing sufficient input and output bandwidth challenging, the optical design is indeed very demanding. We would contend, though, that the critical advantage of this technology is the scaling regime which it exhibits. Only the input and output bandwidth need scaling, the intermediate computational substrate does not.
The challenge of providing sufficient interconnection in, for example, an electronic or optical integrated circuit to perform extremely high resolution Fourier transforms is considerable. Indeed, it is at exceptionally high resolutions that the benefits of this method become significant, rendering the trade-offs less relevant.
It is conceivable to evaluate 2D Fourier transforms at a scale and rate untenable with other methods and platforms.