Fields of research at a glance:

- Data and images: multispectral imaging; LiDAR and mobile mapping; imaging radar; time series
- Computation: batch and stream processing; real-time processing; high-performance computing (HPC); general-purpose GPU computing (GPGPU)
- Innovation Hub: physical space close to production; scientific and technical personnel; internal and external collaboration
- Artificial intelligence: data science (machine learning, deep learning); research into new architectures and models; automated and scalable workflows

Innovation Hub

Tracasa’s commitment to innovation is part of our DNA and is reflected in countless undertakings and actions. To make innovation a reality across our products and services, we saw the need to create the right context for its development. Creating an Innovation Hub as a multidisciplinary group of people, one that represents the company’s know-how and produces enriching synergies between teams and projects, has greatly reduced lead time and provides our projects with technological development that places them at the forefront of the market.

It is also enriching to maintain close collaboration with other institutions and companies, creating a network that benefits everyone: it allows innovation ideas to unfold and gives us access to sources of knowledge and external resources that complement our own. Along these lines, we work closely with the Artificial Intelligence and Approximate Reasoning (GIARA) group of the Public University of Navarra, and we have brought renowned artificial intelligence and data science experts into our own team. We have also established collaboration links with research centres and with public and private companies.

Fields of research

Artificial intelligence

Since the creation of the Innovation Hub, data science and artificial intelligence have been our main line of research and technological development. They enhance our ability to distil the knowledge that production teams currently possess into models, automating repetitive or lower-added-value processes and improving the more complex ones with the support of these techniques.

Research into machine learning and deep learning models and techniques is taking our developments to another scale of results. Our know-how and efforts are directed at investigating different neural network architectures and improving their performance through better sample annotation, optimization and augmentation techniques.

The experience acquired in this area gives us cutting-edge process engineering across the data science workflow: automating data preparation, feature engineering, training, validation and deployment enables us to ensure the quality of the models and to keep them up to date.
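As an illustration of the kind of automated workflow described above, the following sketch chains data preparation, training and validation into a single reproducible pipeline using scikit-learn. The dataset, model choice and parameters are purely illustrative assumptions, not our production setup.

```python
# Minimal sketch of an automated train/validate workflow (illustrative only).
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for a prepared training dataset
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Data preparation and model fitting in one reproducible pipeline,
# so the same steps run identically at training and deployment time
pipe = Pipeline([
    ("scale", StandardScaler()),
    ("model", RandomForestClassifier(n_estimators=100, random_state=0)),
])
pipe.fit(X_train, y_train)

# Validation step: score on held-out data before any deployment decision
score = accuracy_score(y_test, pipe.predict(X_test))
```

Packaging preparation and model together is what makes such a workflow repeatable: retraining on fresh data is a single `fit` call, and the validation score gates whether the updated model is kept.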

Finally, research into reliable tools for validating and comparing metrics, together with techniques for explaining the predictions of neural network models, allows us to keep improving both the samples and the models.

Data and images

Since the company was established in the 1980s, Tracasa has had a close relationship with digital and geographic information systems and with data processing in these fields. At the same time, achieving the highest possible quality in our products and services has required us, given the distinct nature of the data, to specialise in very different areas of science: geology, geophysics, mathematics, engineering, etc.

The volume of information handled by companies keeps growing, whether because more sensors are deployed collecting information or because samples are taken more frequently and precisely. A sound understanding of the nature of the data, together with mastery of data processing, enables us to obtain more accurate and reliable results.

The lines of research in data and image processing are already giving very good results through mathematical and statistical models of clustering, outlier detection, classification, segmentation and change detection in the following data typologies:

Time series

A time series is a collection of observations of a given phenomenon taken at sequential intervals in time. Many types of sensors can generate time series, and their use and contribution depend on the objective pursued. The growing number of sensors deployed worldwide (IoT) is increasing the number of studies that can draw on time series.

Predicting physical and environmental phenomena increasingly requires working with temporal or spatio-temporal samples. These series are vital and irreplaceable for anomaly detection, preventive studies and inference of future situations. This line of innovation is already consolidated and directly benefits projects related to environmental, atmospheric, meteorological and agricultural indices, among others.
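A minimal sketch of the kind of time-series anomaly detection mentioned above: flag a point when it deviates strongly from a rolling window of recent values. The synthetic series, window size and threshold below are illustrative assumptions, not a production detector.

```python
import numpy as np

# Toy seasonal series (e.g. a temperature-like signal) with one injected spike
rng = np.random.default_rng(0)
series = 20 + np.sin(np.linspace(0, 6 * np.pi, 200)) + rng.normal(0, 0.2, 200)
series[120] += 5.0  # anomalous spike at index 120

# Rolling z-score detector: compare each point to the previous `window` values
window = 20
anomalies = []
for i in range(window, len(series)):
    ref = series[i - window:i]
    z = (series[i] - ref.mean()) / ref.std()
    if abs(z) > 4:
        anomalies.append(i)
```

The injected spike at index 120 stands several standard deviations above its recent history and is flagged, while the seasonal variation itself stays below the threshold because the rolling window adapts to it.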

Point clouds

Sampling with LiDAR sensors generates a cloud of geo-referenced points, one for each return of a pulsed laser emitted from the sensor onto the surface under study. Falling costs and the continuous improvement of these sensors mean that we capture ever more data at ever higher point densities. The exponential growth of LiDAR data and the heterogeneity of sensors on the market make manual or semi-automatic processing increasingly impracticable.

This prompted us to open this line of research years ago, and the achievements obtained have positioned us at the forefront of the market. Projects related to the generation of surface or terrain models, hydrography, mobile mapping, 3D cartography, etc. are already benefiting from the progress made in this line of research.
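To illustrate what automatic point-cloud processing involves at its simplest, here is a deliberately naive ground-filtering sketch on a synthetic cloud: a point counts as ground if it lies close to the lowest return in its grid cell. Real LiDAR classification is far more sophisticated; every threshold and name below is an assumption for demonstration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic LiDAR-like cloud: 950 ground returns over gently noisy flat terrain
n_ground, n_high = 950, 50
xy = rng.uniform(0, 100, (n_ground, 2))            # x, y in metres
ground_pts = np.column_stack([xy, rng.normal(0.0, 0.05, n_ground)])

# 50 elevated returns (vegetation/buildings) directly above existing ground points
high_pts = ground_pts[:n_high].copy()
high_pts[:, 2] += rng.uniform(2, 10, n_high)
pts = np.vstack([ground_pts, high_pts])            # columns: x, y, z

# Naive ground filter: label a point "ground" if its height is within
# 0.5 m of the lowest return in its 10 m x 10 m grid cell
cell = (pts[:, :2] // 10).astype(int)
keys = cell[:, 0] * 100 + cell[:, 1]               # one integer key per cell
is_ground = np.zeros(len(pts), dtype=bool)
for k in np.unique(keys):
    idx = keys == k
    is_ground[idx] = pts[idx, 2] - pts[idx, 2].min() < 0.5
```

Even this toy filter separates the flat-terrain returns from the elevated ones; production classifiers replace the fixed threshold with progressive surface models and learned features.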

Multispectral imaging

Multispectral images capture data in different ranges of electromagnetic wavelengths and store them in separate bands. These bands record not only wavelengths perceptible to the human eye (RGB) but also others, such as infrared. Obtaining this type of image ourselves from airborne surveys, or from satellite image providers, gives us a rich and varied repository on which to apply our technological developments.

The analysis and processing of this type of image is vital for the spatio-temporal studies in many of the projects we develop. Research into automatic, precise classification, segmentation, clustering and super-resolution of these images allows us to substantially improve the results of our projects. Topics such as land use and land cover, the effects of environmental disasters, urban development, forestation and deforestation are already benefiting from the advances in this line of research.
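A classic example of band arithmetic on multispectral data is the NDVI (Normalized Difference Vegetation Index), which exploits the fact that vegetation reflects strongly in near-infrared but weakly in red. The sketch below computes it on two toy bands; the reflectance values and the 0.3 threshold are illustrative assumptions.

```python
import numpy as np

# Two toy bands of a 2x2-pixel multispectral scene (reflectance in [0, 1]):
# left column is vegetation-like, right column is bare-soil-like
red = np.array([[0.10, 0.40],
                [0.12, 0.45]])
nir = np.array([[0.60, 0.42],
                [0.55, 0.44]])

# NDVI = (NIR - RED) / (NIR + RED): near +1 for vegetation, near 0 for soil
ndvi = (nir - red) / (nir + red)

# Simple threshold classification into vegetation / non-vegetation
vegetation = ndvi > 0.3
```

The same pattern of per-pixel arithmetic across bands underlies many multispectral indices; classification and segmentation models then build on such features.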

Imaging Radar

Emitting a short pulse of electromagnetic energy and capturing its echo with directional precision, measuring the time between emission and reception, allows us to obtain images that, once processed, reflect the relief of the surface. One of the most distinguishing characteristics of imaging radar is that it is unaffected by cloud cover and can be acquired at night.

It is already very valuable for countless studies, and combined with multispectral imaging it makes possible levels of precision never achieved before. Several works and projects are already benefiting from the advances in this line of imaging radar research, on themes such as crop classification, surface roughness, and ground displacement and deformation.
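The timing principle described above can be written down directly: the pulse travels out to the target and back, so the slant range is half the round-trip delay multiplied by the speed of light. A minimal sketch (the function name is ours):

```python
# Slant range from a radar echo: range = c * t / 2, where t is the
# measured delay between pulse emission and echo reception.
C = 299_792_458.0  # speed of light in vacuum, m/s

def slant_range(delay_s: float) -> float:
    """Distance to the target given the round-trip echo delay in seconds."""
    return C * delay_s / 2.0

# An echo received ~66.7 microseconds after emission corresponds
# to a slant range of roughly 10 km
r = slant_range(66.7e-6)
```

Imaging radar builds two-dimensional imagery from exactly this measurement, resolving targets in range by delay and in azimuth by antenna pointing (or synthetic aperture processing).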


Computation

A recurrent feature of all our research initiatives is the volume of data to be ingested and processed. Technological development in computation is valuable in itself, but it becomes strategic when success in the other research areas depends closely on it. For this reason we have paid special attention to this subject, and it is one of the most important areas of development in the R&D team.

Optimized batch or stream processing requires adequate, scalable infrastructure, which often exceeds the limits of our own capacity and leads us to rely on cloud providers and computation centres.

Distributed computation infrastructures, high-performance computing (HPC) centres and the incorporation of GPGPUs into data processing phases are the minimum requirements wherever real-time or near-real-time processing must be supported.

Data-intensive and latency-sensitive projects, such as the classification of LiDAR in Navarra, are already benefiting from the progress made in this area.