Groundbreaking research by Czech physicists opens door to faster data entry

Illustrative photo: Pixabay, CC0 Public Domain

Experts from the Institute of Physics of the Czech Academy of Sciences recently made headlines with groundbreaking research in which they uncovered a method for data entry and storage in computing that is considerably faster than what is available at present. The team was able to show that spintronics based on antiferromagnets can write data 1,000 times faster than is possible in common memory media. Their findings made a splash within the scientific community, and it is easy to see why: they have the potential to fundamentally change computing years down the line, not least in the area of AI research, if a neuromorphic hardware component is achieved.

Tomáš Jungwirth,  photo: Marián Vojtek
I spoke to Tomáš Jungwirth, the head of the Department of Spintronics and Nanoelectronics at the Academy of Sciences' Institute of Physics, about the findings. He began by explaining how classic spintronic devices have operated until now and discussed how things could change moving forward.

"Although the term 'spintronics' is newer, spintronics devices have already been in mass production for decades: sensors in hard drives, hard drives of course allowing things like the Internet and virtually unlimited storage of data. And in hard drives you can see the difference between classic sensors and spintronics ones.

"Classical sensors which used to be hard drives two or three decades ago are just coils, just electromagnets, while in spintronics we rely on the smallest devices we have available in microelectronics, which are the spins of electrons, tiny coils if you will which are embedded in every elementary particle which is the electron. From a physics point of view, spintronics represents a complete shift in the paradigm because man-made coils - electromagnets - are of course bulky and large and the bits that they can sense have to be large.

"But when you use the spins of electrons, the micro devices are much smaller, they are in fact nanodevices and the buts can be scaled down which is the reason today's hard drives have such enormous capacity, allowing unlimited storage."

Besides size, what are the other advantages of spintronics in electronics?

"Downscaling has been the principle that governed microelectronics for half a century but the problem is that now we are almost officially at an end. Moore's Law, which has driven the field, is gone. We have reached the limits and there is virtually no way to go beyond that. We are almost reaching the limits of inter-atomic distances in crystals so that calls for new principles if we want to still upgrade our hardware and spintronics is one of the very few alternatives.

"Downscaling has been the principle that governed microelectronics for half a century. The problem is that now we are almost officially at an end."

"It is actually fundamentally different from classical electronic devices because it uses two basic quantities that you have in the electron: not only the charge but the micro coils if you want from the spins, which are not used in any other electronic devices. So this allows us the possibility of proceeding beyond the possibilities of semi-conductors: spin is the quantity in which we can store data, the way we do in hard drives, but we can also use the charge of the electron simultaneously.

"The idea is have all f the functionalities of the computer, that means the logic is now the domain of semi-conductors and storage and memory which is the domain of magnets, to be combined in one micro nano element to do all the functionalities that we need in the computer. So this is one of the ideas of how to further miniaturize computer technology."

What does that mean for the continuation of storing everything in 1's and 0's?

"We can still continue with that paradigm, that is, digital computing where everything is stored in 1's and 0's and algorithms based on this, but the hardware that will work with 1's and 0's and will also store date in long term or short term memories, will be different.

"They will be different in a way that they will allow for fast computing and the faster storing of information and will also allow for lower energy consumption. That s extremely important at a time when we are moving away from only having big computers in offices to the so-called Internet of things, when you have microelectronic devices spread across streets with cars and autonomous vehicles and all that, which will require speed but also low energy consumption. This is something which is difficult to provide via conventional microelectronics."

Illustrative photo: Pixabay,  CC0 Public Domain
That brings us to one of the central parts of your research, the centerpiece in a way: antiferromagnetism. How does this shake up the paradigm?

"The use of antiferromagnetism is unlike any other spintronics devices currently in production. I mentioned hard drives but you also have solid state memory chips which this year are being turned into a mass production type product.

"It is important to say that the research that we are doing in antiferromagnetics is really at the beginning. Right now we are talking about science rather than immediate applications. But again, it is a different approach. In Nature you have two basic types of magnets, ferromagnets where the electron spins are aligned parallel and together they form a big magnet which we can feel on the outside, these are your basic kitchen or fridge magnets. Those are the basis of existing spintronics technology.

"Then there is the second group of magnets, actually more abundant, antiferromagnets which have spins that alternate when you go from one atom to the next. This was seen as interesting but in the past it was thought that you couldn't really use it because the magnetic force was not sensed and could not be manipulated. But we came up with a new physics principles which allowed means for both and also we showed that you can store information, and this opens a new window in materials research and also in physics where this type of magnet was ignored for almost a century."

You are saying it almost casually but this is the big news, this is the sort of turning point...

"Well we will see how important it will be for applications, if at all. We know from history that if a discovery turns into an application within 10 or 20 years, that is an exception. For that we will need to wait for that a decade at least. But in the scientific field, I can say that the response has been very, very positive and people from our team have gotten invitations to the major conferences and it appears the community is eager to learn more about our research.

"Right now, we are using devices designed for digital computing, to simulate the behavior of a neuron or a synapse but have no hardware device that would behave like those automatically."

"There are a few advantages that we can already foresee: one more obvious is that because you cannot detect magnetic movement from the outside that also means it can not be disturbed from outside magnetic fields. Which are present everywhere: anywhere you have microelectronic devices wherever you have electrical current, you will have disturbing magnetic fields. This is one of the issues for classic spintronics: being too sensitive to magnetic field perturbations. So this is automatically taken care of by antiferromagnets.

"Another advantage is that if you have a ferro bit and you want to make the pitch, the separation of bits on a chip, smaller and you have higher and higher density memories, the magnets begin to 'talk' to each other but of course in a way that we don't want them to. It is unintentional crosstalk: and if you have antiferromagnets you can put them as close as you want and they will not influence each other.

"The next aspect which we think is extremely interesting is that they are potentially much faster if you want to write information in antiferromagnets you can do so by physics principles 1000x faster than with ferro magnets.

"And finally what we already know is that antiferromagnets have potential not only for digital electronics which use the binary code but can store information continuously and we can have sort of analog behavior if you want so you can store many different states on a bit and the bit behaves more like a neuron or synapse in our brain rather than a typical digital computer device. So maybe for artificial neuromorphic network hardware devices this could be a good material basis."

The history of AI has sometimes been described as a history of "failure" - scientists were optimistic in the 1950s and '60s that we would see a major advance forward within a decade. But what we have seen in recent years IS a big jump forward in narrow AI. The genie in the bottle is still artificial general intelligence (AGI) - will that ever be achieved?

Illustrative photo: geralt / Pixabay,  CC0
"Right, yes you are right: the field is not new but we now live in an era of big data. And that is extremely important because you need to teach the neuromorphic network a lot of data before it can really function in the right way. And of course today we have the means of how to collect big data, through the Internet and the Internet of things, and of course our computers now are very powerful so that they can process the data and can emulate or simulate neuromorphic-like behaviors and then begin using the other method, that of recognizing patterns, as opposed to rigorous algorithms, to learn, so that the system is learning though patterns and associating things in the past, something that we do in our brains.

"So we are now living in an era when this is becoming very relevant and there are applications such as pattern recognition or language translation that are already becoming a reality because of existing computing. And the thing to realize about existing computing is that we use basically semi-conductor devices which are digital, 1's and 0's, and we use these devices, designed for digital computing, to simulate the behavior of a neuron or a synapse, rather than having a hardware device that would behave like a neuron or synapse automatically. Right now we do not have a hardware component and we do it via software to emulate the behavior of our brain. It would be a huge step forward if we really had hardware components that did not behave according to 0's and 1's but like synapses and neurons.

"At this point, there are really no champion candidates. There are a few, called memristors, in which people are trying to build neuromorphic networks, but there, there are severe physical limitations. So there are only a few existing candidates. Antiferromagnets are a newly appearing system which kind of naturally have this property: we don't know if at the end it is going to be useful enough or enough to be used in applications, but it is certainly worth exploring because we don't have that many options."

It seems to suggest that one day we could achieve AGI or an augmented consciousness to help tackle problems faced by the world today.

"There are already candidate material systems which can behave like our brain neurons or synapses but they just are not good enough."

"I am very positive about this. I mean, there are already candidate material systems which can behave like our brain neurons or synapses but they just are not good enough but I don't see any fundamental physics limitation.

"It is certainly worth investing in such research and there is a really good chance we will get to such a device. It could be very useful in solving very complex problems. Interestingly enough, there are complex problems which are very difficult to solve within physics or even science itself using algorithmic 0's and 1's digital computing and even there using neuromorphic approaches could accelerate the very research itself.

"In my ideal world we will use neuromorphic approaches to optimize or find the best material which will be the next hardware neuron or synapse. Then we will use that for solving problems as well."

As we have shifted to discussing potential applications in AI, it bears saying that part of the scientific community, spearheaded by innovators like Elon Musk or the late Stephen Hawking, is quite concerned about where general artificial intelligence could lead. On the one hand, it could be very useful as a tool in finding solutions, but it could also, theoretically, opt for an optimal solution in which humankind had no place, so to speak. Others are more optimistic. So is it a concern?

"It is but that is common for any technology. And in fact, all of what you are saying is already happening: we already have AI software which allows two computers to communicate with each other on something like a social network and sometimes it is very difficult to recognize if it is a human being on the other end or a computer.

"It can be very difficult to recognize whether the person on the other end of the line is a computer or a real human being. I think that is already happening, it is an issue, but my feeling is the same as with any technology: we should be smart enough to use it mostly to our advantage and not disadvantage."