Digital Signal Processing in Cosmology

What is Cosmology?

Cosmology is a branch of astronomy concerned with the study of the origin and evolution of the universe, from the Big Bang to today and on into the future. There are three main branches of cosmology:

Physical Cosmology, Religious or Mythological Cosmology and Philosophical Cosmology.


Physical Cosmology :

Physical Cosmology is the scientific study of the universe’s origin, its large-scale structures and dynamics, and its ultimate fate, as well as the laws of science that govern these areas. Modern physical cosmology is dominated by the Big Bang theory, which attempts to bring together observational astronomy and particle physics.

Religious or Mythological Cosmology :

Religious or mythological cosmology is defined as a body of beliefs supporting mythological, religious and esoteric literature and traditions of creation and eschatology.

Philosophical Cosmology :

Philosophical cosmology addresses questions about the Universe which are beyond the scope of science, such as those concerning monism, pantheism and creationism.

Now let’s have a brief look at Digital Signal Processing in Cosmology.


Many cosmological applications depend on the use of Fast Fourier Transform (FFT) techniques. Their low computational complexity makes it possible to process huge datasets in reasonable time.

In cosmology, frequently used techniques to discretize a continuous distribution of point particles include Nearest Grid Point (NGP), Cloud In Cell (CIC) and Triangular Shaped Cloud (TSC).
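As a concrete illustration, here is a minimal 1-D sketch of CIC assignment with NumPy. The function name `cic_assign` and the periodic-box convention are illustrative assumptions, not taken from the text.

```python
import numpy as np

def cic_assign(positions, n_grid, box_size):
    """Assign point particles to a 1-D periodic grid with Cloud-In-Cell weights.

    Each particle deposits its unit mass onto the two nearest grid points,
    weighted linearly by its distance to each.
    """
    grid = np.zeros(n_grid)
    dx = box_size / n_grid
    x = positions / dx                       # particle position in grid units
    left = np.floor(x).astype(int)           # index of the left grid point
    frac = x - left                          # fractional distance from it
    np.add.at(grid, left % n_grid, 1.0 - frac)        # weight to left cell
    np.add.at(grid, (left + 1) % n_grid, frac)        # weight to right cell
    return grid

# A particle sitting exactly on a grid point contributes only there
rho = cic_assign(np.array([2.0]), n_grid=8, box_size=8.0)
```

NGP would instead round each particle to the single nearest cell, and TSC would spread it over three cells with quadratic weights; CIC is the linear middle ground.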

Sampling with these filter approximations reduces the information content of the continuous signal to that represented by a set of discrete points. It also introduces sampling artifacts such as aliasing and Gibbs ringing.

The aim for signal processing is to find low-pass filter approximations which sufficiently suppress sampling artifacts while being computationally cheaper than the ideal sampling procedure.


The FFT is a valuable tool in the processing of large cosmological datasets due to its computational efficiency. It also allows us to systematically apply many mathematical operations, such as convolution or deconvolution of data. However, the use of the FFT imposes two strong requirements: the function has to be discrete not only in real space but also in Fourier space.
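The convolution theorem is what makes FFT-based convolution cheap: a circular convolution in real space becomes a pointwise product in Fourier space. A minimal sketch (the helper name `fft_convolve` is ours):

```python
import numpy as np

def fft_convolve(signal, kernel):
    """Circular convolution via the convolution theorem:
    convolution in real space = pointwise product in Fourier space."""
    return np.real(np.fft.ifft(np.fft.fft(signal) * np.fft.fft(kernel)))

sig = np.array([1.0, 2.0, 3.0, 4.0])
ker = np.array([1.0, 0.0, 0.0, 0.0])   # delta kernel at index 0
out = fft_convolve(sig, ker)           # convolving with a delta returns the signal
```

Deconvolution works the same way, dividing instead of multiplying in Fourier space (with care near zeros of the kernel spectrum).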

It is well known that a natural physical signal, such as the galaxy distribution, lives neither in a discretized real space nor in a discretized Fourier space. Reducing such a signal to a finite set of discrete sample points therefore introduces an infinite loss of information.


A necessary requirement for the application of FFTs is the real-space discreteness of the signal. Converting a continuous signal into a discrete representation can cause several artifacts, such as aliasing and Gibbs ringing. A good way to understand, analyze and eliminate these artifacts is through Fourier analysis.

The necessary requirements for real-space discretization rest on the sampling theorem, low-pass filtering, and sampling on a finite real-space domain.

  1. Sampling Theorem:

When a signal is sampled, it must be band-limited if its information content is to be recovered correctly.

In addition, the sampling frequency must be greater than twice the maximum frequency present in the signal.

The first condition ensures that a sufficiently large sampling frequency exists which can separate the replicated spectra from one another. The second condition ensures that the data samples are sufficient to exactly recover the continuous input signal.
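The consequence of violating the second condition can be shown in a few lines: a tone above the Nyquist frequency produces exactly the same samples as a lower-frequency alias. The specific frequencies below are our own illustrative choice.

```python
import numpy as np

fs = 4.0                    # sampling frequency in Hz, so Nyquist = 2 Hz
n = np.arange(8)
t = n / fs                  # sample instants

high = np.sin(2 * np.pi * 3.0 * t)     # 3 Hz tone, above Nyquist
alias = -np.sin(2 * np.pi * 1.0 * t)   # the 1 Hz tone it aliases onto

# At these sample points the two tones are indistinguishable:
same = np.allclose(high, alias)        # True
```

Once sampled, no processing can tell the two tones apart, which is why the band limit must be enforced before sampling.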

Source :

2. Low Pass Filtering :

Natural signals aren’t generally band-limited, and so must be low-pass filtered before they’re sampled. The sampling operation must include some form of local averaging: a filter is applied to the signal which cuts away all Fourier modes above a certain maximal frequency. Such a filter is called an ideal low-pass filter.

It has unity gain within the pass-band, hence not attenuating the Fourier modes it lets pass, while it perfectly suppresses all the power in the stop-band.
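In Fourier space an ideal low-pass filter is just a sharp mask: unity below the cutoff, zero above. A minimal NumPy sketch (the function name and cutoff convention are our assumptions):

```python
import numpy as np

def ideal_lowpass(signal, cutoff_frac):
    """Zero all Fourier modes above cutoff_frac * Nyquist;
    modes below the cutoff pass with unity gain."""
    spec = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal))        # cycles per sample, 0 .. 0.5
    spec[freqs > cutoff_frac * 0.5] = 0.0       # perfect stop-band suppression
    return np.fft.irfft(spec, n=len(signal))

n = np.arange(64)
sig = np.sin(2 * np.pi * 2 * n / 64) + np.sin(2 * np.pi * 20 * n / 64)
low = ideal_lowpass(sig, cutoff_frac=0.5)       # removes the 20-cycle mode
```

In real space this sharp Fourier mask corresponds to a sinc kernel that extends over the whole domain, which is the source of the practical problems discussed below.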


3. Sampling on a finite real-space domain :

Within the sampling operator, the sum over samples extends over the infinite domain of real space, from minus to plus infinity. It is generally not possible to evaluate this infinite sampling sum with a computer.

In general it is impossible to uniquely restore information from signals which are sampled on finite domains, and interpretations drawn from such sampled signals will always carry some uncertainty.


In practice, then, the real-space representation of a continuous signal is discretized with a finite sampling operator, so the sampling theorem is no longer applicable in a strict sense.

However, discretizing the real-space representation of the signal is typically not enough for data-processing purposes. For instance, it can easily be shown that the Fourier transform of such a discrete signal is still a continuous function in Fourier space, whereas applications using FFT techniques also require a discrete Fourier-space representation of the signal.

  1. Sampling in Fourier-space :

The Fourier representation is obtained by applying an FFT or DFT to a (normally low-pass filtered) non-periodic signal, and it deviates from the true Fourier representation through the mode-coupling function. Therefore, the continuous signal should be filtered in such a way that its Fourier representation contains a discrete spectrum.

2. The instrument response function of our computer and the loss of information :

To process information via FFT techniques on a computer, the function needs to be band-limited and discrete in Fourier space. This means the function has to be periodic on the observational interval and representable by only a finite number of Fourier modes. Natural signals, in general, do not obey these requirements, so processing the data with a computer has to be understood as an additional observational and filtering step which modifies the input data.


Ideal sampling procedure :


The ideal sampling procedure yields the most information-conserving representation of the continuous signal in discrete form.

However, because the ideal low-pass filter kernel extends over all space, the sum over all particles must be evaluated for every individual voxel of the discrete grid, which makes this procedure impractical for real-world applications.
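The cost problem is easy to see in code: with a sinc (ideal low-pass) kernel, every particle touches every grid point, so the work scales as O(n_particles × n_grid). A 1-D sketch under our own naming, ignoring the finite-domain truncation of the kernel:

```python
import numpy as np

def ideal_sample(positions, n_grid, box_size):
    """Ideal (sinc-kernel) sampling sketch: every particle contributes
    to every grid point, so the cost is O(n_particles * n_grid)."""
    dx = box_size / n_grid
    grid_x = (np.arange(n_grid) + 0.5) * dx   # cell-centre coordinates
    grid = np.zeros(n_grid)
    for p in positions:                        # one full grid pass per particle
        grid += np.sinc((grid_x - p) / dx)     # sinc low-pass kernel weight
    return grid

# A particle exactly at a cell centre hits only that cell
# (sinc vanishes at the other, integer-offset cells)
grid = ideal_sample(np.array([3.5]), n_grid=8, box_size=8.0)
```

Compare this with CIC, where each particle touches only two cells; that locality is exactly what the approximate schemes trade accuracy for.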


Because of these limitations, the ideal sampling procedure is not feasible in practice. A more convenient approach is to approximate the ideal sampling operator by a less accurate, but faster calculable, sampling operator.

This approach approximates the low-pass filter by a function with compact support in real space. As a result the convolution can be calculated faster, at the price of not completely suppressing the aliasing power within the stop-bands.


Super-sampling addresses the aliasing problem by taking more samples than ordinary particle-assignment schemes like CIC or TSC. By taking samples at sub-pixels, it can capture the details of a natural continuous signal more accurately. The target pixel value is then obtained by averaging the values of the sub-pixels, which minimizes aliasing edge effects in the signal. Super-sampling thus reduces aliasing by band-limiting the input signal and exploiting the fact that signal content usually decreases as frequency increases.

  • Super resolution and down-sampling :

The super-sampling method consists of two main steps: a super-sampling step, in which the signal is sampled at high resolution, and a down-sampling step, in which the high-resolution samples are resampled to the target resolution. It makes use of FFTs to allow for pass-band attenuation correction and a fast, efficient calculation of the overall method. This two-stage filtering process is shown in the figure and involves the following steps:

Fig. Two-stage sampling scheme

(i) Super-sampling: The continuous signal is sampled to a grid with a resolution n times larger than the target resolution. This can be achieved by applying the CIC or TSC method, allowing fast and efficient computation of the high-resolution samples.

(ii) Down-sampling: The high-resolution samples are corrected for pass-band attenuation and low-pass filtered. The filtered high-resolution samples are then resampled at the target resolution.
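The two steps above can be sketched in 1-D as follows. This is a simplified illustration under our own assumptions (CIC on the fine grid, an ideal low-pass at the target Nyquist frequency, and simple decimation); it omits the pass-band attenuation correction mentioned in step (ii).

```python
import numpy as np

def supersample(positions, n_target, box_size, factor=4):
    """Two-stage sampling sketch: CIC onto a grid `factor` times finer
    than the target, low-pass at the target Nyquist, then decimate."""
    n_high = n_target * factor
    dx = box_size / n_high
    # (i) super-sampling: CIC assignment on the high-resolution grid
    x = positions / dx
    left = np.floor(x).astype(int)
    frac = x - left
    grid = np.zeros(n_high)
    np.add.at(grid, left % n_high, 1.0 - frac)
    np.add.at(grid, (left + 1) % n_high, frac)
    # (ii) down-sampling: suppress modes above the target Nyquist, decimate
    spec = np.fft.rfft(grid)
    freqs = np.fft.rfftfreq(n_high, d=dx)
    k_nyq = n_target / (2.0 * box_size)        # target-grid Nyquist frequency
    spec[freqs > k_nyq] = 0.0
    filtered = np.fft.irfft(spec, n=n_high)
    return filtered[::factor] * factor         # rescale so total mass is kept

rho = supersample(np.array([1.3, 5.7, 2.2]), n_target=8, box_size=8.0)
```

Because the low-pass removes the modes that would alias under decimation, the coarse field retains the total particle count while suppressing the aliasing power that plain CIC at the target resolution would leave in.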


A few well-known applications of these techniques in cosmology are:

  1. Estimating the power spectrum from galaxy surveys.

2. Calculating the gravitational potential of the dark matter distribution in numerical simulations.
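The second application is a textbook FFT use case: in Fourier space the Poisson equation becomes algebraic, so each mode of the potential follows from the corresponding density mode. A 1-D periodic sketch with illustrative units (G = 1 and a single-mode test field of our choosing):

```python
import numpy as np

def potential_fft(delta, box_size, G=1.0):
    """Solve the periodic Poisson equation  d2(phi)/dx2 = 4*pi*G*delta
    in 1-D via FFT: each Fourier mode obeys  -k^2 phi_k = 4*pi*G*delta_k."""
    n = len(delta)
    delta_k = np.fft.rfft(delta)
    k = 2 * np.pi * np.fft.rfftfreq(n, d=box_size / n)
    phi_k = np.zeros_like(delta_k)
    phi_k[1:] = -4 * np.pi * G * delta_k[1:] / k[1:] ** 2   # skip the k=0 mode
    return np.fft.irfft(phi_k, n=n)

n, box = 64, 2 * np.pi
x = np.arange(n) * box / n
delta = np.cos(x)                     # single density mode with k = 1
phi = potential_fft(delta, box)       # analytic answer: -4*pi*cos(x)
```

In simulations the same idea is applied on a 3-D grid built with one of the mass-assignment schemes above, with k² replaced by |k|² and the mean density subtracted so that the k = 0 mode vanishes.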

Authors :

Harshad Dabhade

Ruturaj Deshmukh

Dhanesh Manwani

Ameya Gandhe

Saurabh Ghodake