The Laue Lens Library, a Python code for the simulation of Laue lenses for astrophysical and terrestrial applications

Focusing X- and gamma-rays from astronomical sources represents a step forward for the next generation of telescopes operating in the X/gamma-ray energy range. To date, astrophysical observations above 80 keV have been performed only by direct-viewing instruments, whose sensitivity is limited by the background, and many astrophysical questions are still open in the 70-600 keV energy range. A new generation of broad-band X-ray focusing telescopes, which will extend the operative energy up to several hundred keV, is at a mature stage of development. The most efficient technique to focus photons above 80 keV is Bragg diffraction from crystals in the transmission configuration (Laue case). For the design and performance prediction of a possible Laue lens for space applications, a Python code with a number of libraries has been developed. With the Laue Lens Library (LLL) we can fully simulate the performance of a Laue lens in terms of collecting and effective area, telescope sensitivity and imaging capability, through the investigation of the achieved point spread function (PSF). In particular, the code describes the source of radiation (either a point-like or an extended source, placed at finite distance or at infinite distance as in the astrophysical case) in terms of emitted intensity and spectral shape.
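As an illustration of the physics underlying such a lens, the Bragg condition links photon energy, crystal d-spacing and diffraction angle, and hence fixes the radius of the lens ring that focuses a given energy. The following is a minimal sketch, not taken from the LLL code, with illustrative crystal and focal-length values:

```python
import math

HC_KEV_ANGSTROM = 12.398  # h*c in keV * Angstrom

def bragg_angle(energy_kev, d_spacing_angstrom, order=1):
    """Bragg angle (radians) for diffraction of a photon of given energy."""
    s = order * HC_KEV_ANGSTROM / (2.0 * d_spacing_angstrom * energy_kev)
    if s > 1.0:
        raise ValueError("energy too low for this d-spacing")
    return math.asin(s)

def ring_radius(energy_kev, d_spacing_angstrom, focal_length_m):
    """Radius of the lens ring that diffracts this energy onto the focus
    (transmission/Laue geometry: the beam is deviated by 2*theta)."""
    theta = bragg_angle(energy_kev, d_spacing_angstrom)
    return focal_length_m * math.tan(2.0 * theta)

# Cu(111) planes, d ~ 2.087 Angstrom; 20 m focal length (illustrative values)
theta = bragg_angle(300.0, 2.087)
r = ring_radius(300.0, 2.087, 20.0)
print(f"theta = {math.degrees(theta):.3f} deg, ring radius = {r * 100:.1f} cm")
```

At these high energies the Bragg angles are a fraction of a degree, which is why Laue lenses need long focal lengths.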
Principal investigator: Dr. Enrico Virgilli virgilli@fe.infn.it

Gamma-ray burst broadband afterglow modelling based on hydrodynamics simulations with BOXFIT

Gamma-ray bursts (GRBs) are cosmic explosions connected with the death of massive stars and are among the most energetic and most distant events we currently observe in the Universe. These explosions are observed as second- to minute-long flashes of gamma-rays. In the aftermath of a GRB, a long-lived (days to months) decaying source is visible at longer wavelengths, from X-rays down to radio; it is the result of the blast wave sweeping up the interstellar medium, and is called the afterglow. The GRB group of the Physics Department has been collaborating with several international groups actively involved in collecting and interpreting GRB broadband afterglow data; our group mainly collaborates and shares data with the GRB group of the Astrophysics Research Institute of Liverpool John Moores University (United Kingdom). Recently, Van Eerten and collaborators developed a powerful tool for fitting such data sets, which can be used to constrain the GRB explosion parameters through high-resolution two-dimensional relativistic hydrodynamical (RHD) simulations. This code, named BOXFIT, can be retrieved from the website listed in the references below, and its details are reported in a dedicated refereed paper (Van Eerten et al. 2012). The dynamics of the afterglow blast wave have been calculated in a series of 19 high-resolution two-dimensional simulations performed with a specific RHD code; the results have been compressed and stored in a series of 'box' data files, and BOXFIT calculates the fluid state for arbitrary fluid variables using interpolations between the data files and analytical scaling relations, so end users do not need to perform RHD simulations themselves. The code can run both in parallel and on a single core, but the specific task of data fitting requires the parallel configuration. We have successfully compiled and run it on a Linux machine with gcc v4.4.4, in both 32- and 64-bit configurations.
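As background for what the fitting stage does, afterglow light curves are, to first approximation, power-law decays in time, and fitting the decay slope is the simplest version of the parameter-constraint problem. A minimal sketch on synthetic data (purely illustrative; it has nothing to do with BOXFIT's RHD-based internals):

```python
import numpy as np

# Synthetic afterglow light curve F(t) = F0 * t^(-alpha): a single power-law
# decay with multiplicative noise (illustrative values, not real data).
rng = np.random.default_rng(0)
t = np.logspace(-1, 2, 30)            # days since the burst
alpha_true, f0_true = 1.2, 3.0e-3     # decay slope, flux at t = 1 day (mJy)
flux = f0_true * t ** (-alpha_true) * rng.lognormal(0.0, 0.05, t.size)

# Fit log10 F = log10 F0 - alpha * log10 t by linear least squares
A = np.vstack([np.ones_like(t), -np.log10(t)]).T
coef, *_ = np.linalg.lstsq(A, np.log10(flux), rcond=None)
f0_fit, alpha_fit = 10.0 ** coef[0], coef[1]
print(f"alpha = {alpha_fit:.2f}, F0 = {f0_fit:.2e} mJy")
```

Real broadband fits replace this single power law with the full simulation-based model, jointly over many frequencies.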
Principal investigator: Dr. Cristiano Guidorzi guidorzi@fe.infn.it

A GPU and mixed CPU-GPU solution of large-scale imaging problems

This project aims to study and develop an efficient GPU-based parallel implementation of first-order numerical optimization methods for some classes of nonlinear mathematical programming (NLP) problems that are particularly relevant to the solution of large-scale discrete inverse problems.
These NLPs are generally characterized by an objective function, which can be strongly nonlinear, and by quite simple constraints, such as box and/or linear constraints. In a large number of applications the problem size is an issue, so first-order methods often provide a helpful tool for facing the large computational complexity. Despite the classical, well-known suboptimal convergence rate of gradient-based methods, recent progress has made it possible to tailor acceleration strategies that greatly improve their speed, making them competitive with second-order methods and other more complicated techniques. One of the most successful of these first-order approaches is the scaled gradient-projection (SGP) method, accelerated by suitable choices of the scaling matrix and of the projection steplength [1].
We are interested in applying this approach to image and volume reconstruction problems, mainly the denoising and deblurring of Poisson-noised data. These problems arise in a wide variety of contexts, such as astronomy, medicine, microscopy, engineering and geophysics. As is well known, these kinds of problems are ill-posed (in the Hadamard sense) and often ill-conditioned, so their direct solution is impracticable. Regularization techniques are then the only viable way to recover a meaningful solution: once mathematically stated, these techniques yield NLPs of the form described above, which can thus be tackled by the SGP method.
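The core iteration in this family of methods alternates a (possibly scaled) gradient step with a projection onto the simple constraint set. The following is a minimal sketch for a box-constrained least-squares toy problem, i.e. an unscaled, fixed-steplength special case of SGP, not the group's implementation:

```python
import numpy as np

def projected_gradient_box(A, b, lo, hi, iters=2000):
    """Minimize 0.5 * ||Ax - b||^2 subject to lo <= x <= hi by projected
    gradient descent (scaling matrix = identity, steplength = 1/L)."""
    # 1/L where L = ||A||_2^2 is the Lipschitz constant of the gradient
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.clip(np.zeros(A.shape[1]), lo, hi)
    for _ in range(iters):
        grad = A.T @ (A @ x - b)            # gradient of the objective
        x = np.clip(x - step * grad, lo, hi)  # step, then project on the box
    return x

rng = np.random.default_rng(1)
A = rng.standard_normal((40, 10))           # toy, well-conditioned operator
x_true = rng.uniform(0.0, 1.0, 10)
b = A @ x_true                              # consistent data
x = projected_gradient_box(A, b, 0.0, 1.0)
print(np.max(np.abs(x - x_true)))
```

SGP improves on this baseline by choosing a diagonal scaling matrix and Barzilai-Borwein-type steplengths, which is where the practical speed-up comes from.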
Principal investigator: Dr. Gaetano Zanghirati gaetano.zanghirati@unife.it

FOCUS: Fdtd On CUda Simulator

The Finite-Difference Time-Domain (FDTD) method is a numerical technique commonly used in electromagnetics to solve complex problems where, due to the complexity of the geometries and/or of the dielectric properties of the materials in the simulation scenario, an analytical solution cannot be found. Finding an analytical solution for a complex problem, in fact, usually requires the introduction of strong simplifications, which can make the final results unusable.
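The method discretizes Maxwell's equations on a staggered (Yee) grid and advances the electric and magnetic fields in a leapfrog time-stepping scheme. A minimal 1D sketch in normalized units (free space, Courant number 1; illustrative only and unrelated to the FOCUS code itself):

```python
import numpy as np

# 1D FDTD: Ez on integer grid points, Hy on half-integer points between them.
# Normalized units with the "magic" time step c*dt = dx, so all update
# coefficients are 1. The grid ends act as perfectly conducting walls.
nx, nt = 200, 100
ez = np.zeros(nx)
hy = np.zeros(nx - 1)

for n in range(nt):
    hy += np.diff(ez)                 # H update from the curl of E
    ez[1:-1] += np.diff(hy)           # E update; ez[0], ez[-1] stay 0 (PEC)
    ez[nx // 2] += np.exp(-((n - 30) / 10.0) ** 2)  # soft Gaussian source

print(f"peak |Ez| after {nt} steps: {np.abs(ez).max():.3f}")
```

The injected pulse splits into two counter-propagating waves, the expected d'Alembert behavior; 2D/3D versions of the same loop are what map naturally onto GPU threads.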
Principal investigator: Dr. Gaetano Bellanca gaetano.bellanca@unife.it

GPU Monte Carlo

The research project aims to develop/port a Monte Carlo code for the study of radiation-matter interaction on the hybrid server. Home-made codes were already developed and are currently used by the research group to simulate the acquisition systems of radiography and Nuclear Medicine applications. In particular, we are going to work on the EGSnrc C++ code, developed at the National Research Council of Canada (Ottawa), which is used to simulate the interaction of photons and electrons/positrons with matter. The goal is to reduce the time needed to perform Monte Carlo simulations of detection systems for applications of medical interest, such as CT, PET and SPECT, which require the generation of several tens of billions of particles to obtain a realistic result.
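The basic ingredient of such transport codes is the sampling of photon free paths from an exponential distribution. A minimal sketch (a toy slab-transmission estimate, not EGSnrc code), checked against the Beer-Lambert law:

```python
import math
import random

def transmitted_fraction(mu, thickness, n_photons=200_000, seed=42):
    """Monte Carlo estimate of the fraction of photons crossing a slab
    without interacting: free paths are sampled as s = -ln(u) / mu."""
    rng = random.Random(seed)
    passed = sum(1 for _ in range(n_photons)
                 if -math.log(rng.random()) / mu > thickness)
    return passed / n_photons

mu, t = 0.2, 5.0             # attenuation coefficient (1/cm), thickness (cm)
mc = transmitted_fraction(mu, t)
exact = math.exp(-mu * t)    # Beer-Lambert analytic answer
print(f"MC: {mc:.4f}  analytic: {exact:.4f}")
```

Each photon history is independent, which is exactly why this workload parallelizes so well on GPUs: billions of such histories can be tracked concurrently.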
Principal investigator: Dr. Giovanni Di Domenico didomenico@fe.infn.it

Experimental Medicine EM

At the Department of Clinical and Experimental Medicine, Section of Nuclear Medicine, a prototype tomographic scanner for small animals is installed, which combines the SPECT, PET and CT modalities. The processing techniques applied to the acquired tomographic data for image reconstruction, possibly jointly using the data from both the emission and the transmission modalities, require high processing power and large storage for the matrices describing the system, which are not always sparse. In particular, for the analysis of emission images, we are interested in the implementation of penalized Expectation-Maximization Maximum-Likelihood (EM-ML) techniques, introducing the scatter correction into the matrix that describes the problem. This would make it possible to reconstruct data acquired with radioisotopes emitting high-energy photons, such as 123-I or 188-Re, which add a background signal to the sinograms: this is the signal that has to be removed for a proper data reconstruction, making semi-quantitative or quantitative analysis possible.
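The classical EM-ML update is a multiplicative iteration that backprojects the ratio between the measured and the currently estimated projections, automatically preserving nonnegativity. A minimal sketch on a toy system matrix (illustrative only: no penalty term and no scatter correction, which are the additions this project targets):

```python
import numpy as np

def mlem(A, y, iters=5000):
    """EM-ML (MLEM) iteration for Poisson data y ~ Poisson(A x):
    x <- x * A^T (y / (A x)) / A^T 1, a multiplicative update."""
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                   # sensitivity image (column sums)
    for _ in range(iters):
        proj = A @ x                       # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / sens          # backproject the ratio, normalize
    return x

rng = np.random.default_rng(3)
A = rng.uniform(0.0, 1.0, (60, 8))         # toy dense system matrix
x_true = rng.uniform(1.0, 5.0, 8)
y = A @ x_true                             # noiseless "sinogram" for the demo
x = mlem(A, y)
print(np.max(np.abs(x - x_true)))
```

In the real scanner the matrix A has millions of rows and columns, which is what drives the processing-power and storage requirements mentioned above.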
Principal investigator: Dr. Giovanni Di Domenico didomenico@fe.infn.it

Inference for logical probabilistic languages

Probabilistic logic languages make it possible to model domains with complex and uncertain relationships between entities. Recently, several probabilistic logic languages have been proposed in the field of logic programming, and the problem of reasoning with these languages is of great interest. In particular, in this project we will compute the probability of a query given a program. The hybrid server will be used to measure the time taken by different algorithms for this problem: algorithms developed by the group will be tested and compared with algorithms developed by other researchers. The large memory capacity of the server will allow the algorithms to be tested on problems of significant size.
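Under the distribution semantics commonly used by these languages, the probability of a query can in principle be obtained by summing the probabilities of the possible worlds (truth assignments to independent probabilistic facts) in which the query holds. A minimal sketch with a hypothetical two-fact program (real inference engines use far more efficient techniques, such as knowledge compilation, precisely because this enumeration is exponential):

```python
from itertools import product

# Hypothetical program: two independent probabilistic facts, and a query
# "alarm" that is true whenever burglary or earthquake is true.
facts = {"burglary": 0.1, "earthquake": 0.2}

def query_prob(facts, query_holds):
    """Exact inference by enumerating all possible worlds."""
    names = list(facts)
    total = 0.0
    for world in product([False, True], repeat=len(names)):
        truth = dict(zip(names, world))
        p = 1.0                            # probability of this world
        for name, is_true in truth.items():
            p *= facts[name] if is_true else 1.0 - facts[name]
        if query_holds(truth):
            total += p
    return total

p = query_prob(facts, lambda w: w["burglary"] or w["earthquake"])
print(p)  # 1 - 0.9 * 0.8 = 0.28
```

The exponential blow-up in the number of worlds is what makes both the algorithmic work and the large-memory server relevant here.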
Principal investigator: Dr. Fabrizio Riguzzi rzf@unife.it

Learning logical probabilistic languages

Probabilistic logic languages are particularly useful in machine-learning problems where the data are characterized by uncertainty and by complex relationships between the entities of interest. The learning algorithms proposed for these languages have proven particularly effective on a wide range of datasets. In this project we will develop algorithms for learning probabilistic logic languages based on logic programming. The hybrid server will be used to test the algorithms developed by the group and compare them with state-of-the-art algorithms, measuring both their accuracy and their speed. The large memory capacity of the server will allow the algorithms to be tested on large datasets.
Principal investigator: Dr. Fabrizio Riguzzi rzf@unife.it