Quantum computers after IBM’s Nighthawk and Princeton’s new qubits: are we entering a new phase?

While most tech headlines revolve around AI, something equally exciting is happening in the background: quantum computers are slowly entering a more serious, more practical phase. In a short span of time, two major announcements landed:

  • IBM, at its Quantum Developer Conference, unveiled the new Nighthawk processor with 120 qubits and a clear roadmap toward fault-tolerant quantum machines.
  • A team of engineers at Princeton University has built a new type of superconducting qubit, made of tantalum on ultra-pure silicon, that can hold quantum information for much longer than is common today.

For people in IT, crypto, and science, these are two sides of the same story: how to get more qubits and more stable qubits at the same time.

[Illustration: a quantum chip with qubits arranged in a network]


IBM’s Nighthawk: 120 qubits and a roadmap to 2029

At its conference, IBM presented the new IBM Quantum Nighthawk processor. It is their most advanced chip to date, featuring:

  • around 120 qubits
  • more than 200 tunable couplers connecting them, which enable more complex interactions between qubits
  • the ability to run quantum algorithms significantly more complex than those on the previous processor generation, while keeping error rates low (see the short circuit sketch after this list)
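
To make the “qubits plus couplers” picture a bit more concrete, here is a minimal sketch in Qiskit (IBM’s open-source quantum SDK) of a small entangling circuit. It does not target Nighthawk specifically and it runs on a local simulator; the two-qubit CX gates it uses are the kind of operation that tunable couplers implement on real hardware.

```python
# Minimal sketch (not Nighthawk-specific): a small entangling circuit in Qiskit,
# simulated locally. Two-qubit gates such as CX are what tunable couplers
# physically implement on superconducting chips.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(3)
qc.h(0)        # put qubit 0 into superposition
qc.cx(0, 1)    # entangle qubit 0 with qubit 1
qc.cx(1, 2)    # extend the entanglement along the chain

state = Statevector.from_instruction(qc)
print(state.probabilities_dict())   # roughly 50/50 between '000' and '111'
```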

IBM is very open about its roadmap:

  • over the next few years – reaching so-called quantum advantage (a situation where a quantum computer is practically more useful than a classical one for specific problems)
  • by the end of the decade – building the first truly fault-tolerant systems, i.e. machines that can run for long periods while actively correcting errors

In parallel with Nighthawk, IBM is also developing an experimental chip often referred to under the codename Loon, which serves as a testbed for advanced error-correction codes, new lattice geometries, and ultra-fast decoders. The idea is for Loon to prove that hardware and software for fault-tolerant operation can work together as a complete system before all of this is packed into larger machines.
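
IBM’s real error-correction experiments involve far more sophisticated codes than anything that fits in a blog post, but the basic encode, inject-an-error, correct cycle can be illustrated with the textbook three-qubit bit-flip code. The sketch below is exactly that simplified illustration, again in Qiskit on a local simulator, and makes no claim about how Loon itself works.

```python
# Textbook three-qubit bit-flip repetition code: a toy illustration of the
# encode / error / correct cycle, not the codes IBM is testing on Loon.
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector

qc = QuantumCircuit(3)
qc.h(0)            # logical qubit: a superposition on qubit 0
qc.cx(0, 1)        # encode it redundantly across two extra qubits
qc.cx(0, 2)

qc.x(0)            # inject a single bit-flip error on qubit 0

qc.cx(0, 1)        # decode: the extra qubits now hold the error syndrome
qc.cx(0, 2)
qc.ccx(1, 2, 0)    # majority vote: flip qubit 0 back if both flags are raised

print(Statevector.from_instruction(qc).probabilities_dict([0]))
# qubit 0 is back to its 50/50 superposition despite the injected error
```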


Princeton’s qubits: holding quantum “focus” for longer

The second part of the story comes from Princeton University. Their team has developed a new version of a superconducting “transmon” qubit, built from tantalum on ultra-pure silicon. The result:

  • coherence times longer than one millisecond
  • up to about 1.6 milliseconds in experiments
  • many times longer than what is currently considered the industry standard

Why does this matter?

Because a quantum computer has to perform thousands to tens of thousands of quantum operations before it “finishes” a computation. If a qubit “forgets” its state before the algorithm is done, the result is practically useless. Longer coherence time means:

  • more computation steps before the system falls out of its quantum state (a rough budget is sketched after this list)
  • less aggressive (and less expensive) error-correction schemes
  • easier integration with existing chip designs, because the materials are compatible with standard semiconductor manufacturing
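
A rough back-of-the-envelope calculation shows why the jump matters. The gate duration below (about 100 nanoseconds) is an assumed, typical order of magnitude for superconducting two-qubit gates, not a figure published by IBM or Princeton:

```python
# Rough gate budget: how many operations fit inside one coherence window?
gate_time_s = 100e-9   # assumed two-qubit gate duration (~100 ns, typical ballpark)

for label, coherence_s in [("~0.1 ms (roughly today's standard)", 0.1e-3),
                           ("1.6 ms (the Princeton result)", 1.6e-3)]:
    budget = int(coherence_s / gate_time_s)
    print(f"{label}: about {budget:,} gates before coherence runs out")
```

That is the difference between roughly a thousand and roughly sixteen thousand operations – exactly the “thousands to tens of thousands” range mentioned above.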

Why is coherence time so important?

A classical bit can stay 0 or 1 for as long as you want, but a qubit holds a superposition of states only for a limited time – that window is its coherence time. After that:

  • noise from the environment,
  • imperfections in the material,
  • or errors in the control signals

break the superposition and the quantum information is lost.
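
A common simplified model (an approximation used here for illustration, not the full physics) says that the chance a qubit still holds its superposition after time t decays roughly as exp(−t / T2), where T2 is the coherence time:

```python
# Simplified decoherence model: superposition "survival" decays as exp(-t / T2).
import math

T2_s = 1.6e-3   # a 1.6 ms coherence time, as in the Princeton experiments

for t_us in (100, 500, 1000, 1600):
    survival = math.exp(-(t_us * 1e-6) / T2_s)
    print(f"after {t_us:>4} microseconds: ~{survival:.0%} chance the superposition survives")
```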

In an ideal world we would like to have:

  • many qubits (hundreds or thousands)
  • with strong connectivity (so they can “talk” to each other)
  • and long coherence times

IBM’s Nighthawk attacks the problem of scaling and connectivity, while Princeton tackles the problem of how long a single qubit can stay coherent. These are two key pieces that need to fall into place if we want to move from demonstrations to practical quantum computers.


What could this mean in the next 5–10 years?

In the short term, these announcements will not change the daily work of programmers, traders, or the average user. Quantum computers are still:

  • expensive,
  • highly specialized,
  • mostly accessible through cloud services and research programs.

But in the medium term (5–10 years), it is reasonable to expect:

  • quantum machines that, for specific tasks (optimization, material simulation, chemistry, financial modeling), deliver real, concrete advantages over classical computers
  • increasingly serious work on post-quantum cryptography, because today’s public-key algorithms (RSA, ECC, and others) will gradually have to be replaced in a world where large quantum computers exist
  • hybrid systems where part of the workload runs on classical CPU/GPU hardware and part on a quantum backend via specialized APIs (see the sketch after this list)
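
What such a hybrid setup might look like in code: a classical loop proposes parameters and a “quantum” function evaluates them. The sketch below is purely illustrative – the function name is made up, and the backend call is replaced by a local Qiskit statevector simulation; on real hardware that call would go out to a cloud service instead.

```python
# Hybrid-loop sketch: classical optimization around a (here simulated) quantum evaluation.
import numpy as np
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, Pauli

def evaluate_on_quantum_backend(theta: float) -> float:
    """Stand-in for a real quantum API call: returns <Z> of a one-parameter circuit."""
    qc = QuantumCircuit(1)
    qc.ry(theta, 0)   # one tunable rotation
    return float(Statevector.from_instruction(qc).expectation_value(Pauli("Z")).real)

# Classical outer loop: a plain grid search standing in for a real optimizer.
thetas = np.linspace(0, np.pi, 21)
best_theta = min(thetas, key=evaluate_on_quantum_backend)
print(f"lowest <Z> = {evaluate_on_quantum_backend(best_theta):.2f} at theta = {best_theta:.2f} rad")
```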

For the crypto community and the broader Web3 ecosystem, this is a signal that over the coming decade security will not be a “set and forget” topic – it will have to be revisited as quantum hardware advances.


Conclusion

IBM’s Nighthawk and Princeton’s qubits with extended coherence times are more than just another scientific headline – they represent concrete steps toward practical quantum computers.

On one front, the goal is to have more and better connected qubits; on the other, to make each individual qubit more stable and longer-lived. When these two lines of progress eventually meet, we will get new classes of computation that classical machines simply cannot match.

For now, quantum computers remain a high-end laboratory story, but these developments show that the transition from theory to practice is already underway – and that the coming decade could be a turning point not only for science, but also for finance, crypto, security, and the complex AI systems that may one day run on quantum hardware.