By combining purpose-built metamaterials and neural networks, researchers at EPFL have shown that sound can be used to produce high-resolution images.
Imaging allows us to characterize an object through far-field analysis of the light or sound waves that it transmits or radiates. The shorter the wave, the higher the image resolution. Until now, however, the level of detail was limited by the size of the wavelength in question. Researchers at EPFL's Wave Engineering Laboratory have successfully shown that a long, and therefore imprecise, wave (in this case a sound wave) can reveal details that are 30 times smaller than its wavelength. To do so, the research team used a combination of purpose-built metamaterial elements and artificial intelligence. Their research, which has just been published in Physical Review X, opens up exciting new possibilities, particularly in the fields of medical imaging and bioengineering.
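As a rough sanity check of the figures quoted above, a short back-of-the-envelope calculation relates the roughly one-meter wavelength to the claimed few-centimeter resolution. The exact frequencies used in the experiment are not given in the article, so the numbers below are illustrative assumptions:

```python
# Back-of-the-envelope numbers based on the article (values are illustrative).
# Conventional far-field imaging cannot resolve features much smaller than
# about half the wavelength -- the diffraction limit.
c = 343.0          # speed of sound in air, m/s
wavelength = 1.0   # approximate wavelength used in the experiment, m

frequency = c / wavelength             # ~343 Hz
diffraction_limit = wavelength / 2     # ~0.5 m: conventional resolution floor
achieved_resolution = wavelength / 30  # ~3.3 cm, the factor-30 gain reported

print(f"frequency ~ {frequency:.0f} Hz")
print(f"diffraction limit ~ {diffraction_limit * 100:.0f} cm")
print(f"achieved resolution ~ {achieved_resolution * 100:.1f} cm")
```

This is only arithmetic on the article's headline numbers; it shows why resolving centimeter-scale detail with a meter-scale wave sits far beyond the conventional limit.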
The team's basic idea was to combine two separate technologies that had each pushed the boundaries of imaging on their own. One is metamaterials: purpose-built elements that can, for example, focus wavelengths precisely. They are known, however, to lose their effectiveness by haphazardly absorbing signals in a way that makes them hard to decipher. The other is artificial intelligence, or more specifically neural networks, which can quickly and efficiently process even the most complex information, although a training phase is required.
To overcome what is known in physics as the diffraction limit, the research team, led by Romain Fleury, conducted the following experiment. They first created a grid of 64 miniature speakers, each of which could be activated according to the pixels in an image. They then used the grid to reproduce sound images of the digits zero through nine with extremely precise spatial detail; the digit images fed into the grid were drawn from a database of roughly 70,000 handwritten examples. In front of the grid, the researchers placed a lattice of 39 Helmholtz resonators (10-cm spheres with a hole at one end) that formed the metamaterial. The sound produced by the speaker grid was transmitted through the metamaterial and picked up by four microphones placed a few meters away. Algorithms then deciphered the sound recorded by the microphones in order to learn how to recognize and redraw the original digit images.
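The reconstruction step can be sketched in miniature. In the toy model below, every shape and value is hypothetical: a fixed, unknown linear map stands in for the physics of the metamaterial and microphones, and a simple least-squares decoder (a stand-in for the deep neural networks the team actually trained) learns to redraw the input images from the far-field recordings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the physical setup (all shapes and values are hypothetical):
# each 8x8 "digit" image drives a speaker grid; propagation through the lossy
# metamaterial to 4 microphones is modeled as one fixed, unknown linear map.
n_pixels = 64            # 8x8 speaker grid, one speaker per pixel
n_samples_per_mic = 50   # time samples kept per microphone
n_mics = 4
n_features = n_mics * n_samples_per_mic

forward = rng.normal(size=(n_features, n_pixels))  # the "physics", fixed

def record(images):
    """Simulate noisy far-field recordings for a batch of flattened images."""
    return images @ forward.T + 0.01 * rng.normal(size=(len(images), n_features))

# "Training set": random binary images standing in for handwritten digits.
train_images = rng.integers(0, 2, size=(2000, n_pixels)).astype(float)
train_recordings = record(train_images)

# Learn a linear decoder via least squares that maps recordings back to images.
decoder, *_ = np.linalg.lstsq(train_recordings, train_images, rcond=None)

# Evaluate on unseen images: decode, then threshold back to binary pixels.
test_images = rng.integers(0, 2, size=(200, n_pixels)).astype(float)
reconstructed = record(test_images) @ decoder
accuracy = ((reconstructed > 0.5) == (test_images > 0.5)).mean()
print(f"pixel accuracy ~ {accuracy:.2f}")
```

The real study replaces the linear decoder with deep neural networks and the random map with actual wave propagation, but the structure is the same: record in the far field, then learn the inverse mapping from examples.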
An advantageous drawback
The team achieved a success rate of almost 90% with their experiment. "By generating images with a resolution of just a few centimeters, using a sound wave whose wavelength was roughly one meter, we went well beyond the diffraction limit," says Romain Fleury. "What's more, the tendency of metamaterials to absorb signals, which had been considered a major drawback, turns out to be an advantage when neural networks are involved. We found that they work best when there is a lot of absorption."
In the field of medical imaging, using long waves to see very small objects could be a real breakthrough. "Long waves mean that doctors can use much lower frequencies, resulting in acoustic imaging methods that are effective even through dense bone tissue." The same applies to imaging that uses electromagnetic waves. "For these types of applications, we wouldn't train the neural networks to recognize or reproduce numbers, but rather organic structures," says Fleury.
New metamaterials manipulate sound to enhance acoustic imaging
Bakhtiyar Orazbayev et al., Far-Field Subwavelength Acoustic Imaging by Deep Learning, Physical Review X (2020). DOI: 10.1103/PhysRevX.10.031029
Provided by the Ecole Polytechnique Federale de Lausanne
Citation: Deep learning and metamaterials make the invisible visible (2020, August 10), retrieved August 10, 2020 from https://phys.org/news/2020-08-deep-metamaterials-invisible-visible.html
This document is subject to copyright. Apart from any fair dealing for the purpose of private study or research, no part may be reproduced without written permission. The content is provided for information purposes only.