
Waveoptical Computing

 

 
 
Basic Idea
Introduction: Mazes and Ricochet Robots
The travelling salesman
Ultra-fast computing
References

Basic Idea

White-light interferometry is a very powerful technique for measuring the topography of surfaces or for imaging layers within scattering media. Apart from these well-known applications, it can also be used for solving computationally difficult problems.
To this end, the problem to be solved is coded by optical path lengths, and the superposition of all possible paths that a photon can travel is used for computing the solution. The solution itself is selected by interference with the reference light. Several gedankenexperiments demonstrate how this method can be used for solving computationally hard problems.


The core of the method consists of three steps:
 
  • Code the problem and its solutions by optical path lengths.
  • Use an optical system to generate a superposition of all possible solutions.
  • Find the correct solution by interference-based detection.

The method is strongly related to quantum computing but works with purely classical waves; no quantum properties such as entanglement are involved. Oltean proposed a pulse-based method for computing that is very similar to this interference-based technique [1,2]. The main idea is the same as here, namely using a superposition of all possible solutions coded by optical path lengths. Due to the so-called "coherent gain" that is exploited by interferometric detection, the interference-based detection is superior for practical implementations [8].


Introduction: Mazes and Ricochet Robots

The basic idea of the method is best demonstrated with a simple problem that is already given in terms of paths. Fig. 1 depicts how we can solve a typical maze problem. The maze is equipped with mirrors so that light entering the maze simultaneously runs through all possible paths. Interference will be detected if the optical path lengths of the two interferometer arms are equal. Therefore, one increases the delay of the reference arm until interference is detected. Now the input of the interference detector is moved (via an optical fiber) back into the maze to the last junction. The reference path length is decreased by the same distance so that we still detect interference. Therefore, we can test (by holding the fiber to all possible arms of the junction) from where the "correct" (meaning interfering) light comes. We repeat the process until we reach the input of the maze. For solving the maze we have to make K measurements, where K equals the number of junctions along the solution path.
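The procedure can be summarized in a short simulation. The following Python sketch is an illustration only: the toy maze, its graph representation, and the function names are assumptions, not part of the optical setup. "Interference is detected" is modeled by checking whether the optical path length from the entrance to the tested junction equals the current reference delay.

import heapq

def shortest_distances(maze, start):
    # Dijkstra search; in the optical experiment this information is
    # provided by the light itself, which floods all paths simultaneously.
    dist = {start: 0.0}
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in maze[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return dist

def interference(dist, node, reference_length, tol=1e-9):
    # Model of the detector: interference occurs when the path length from
    # the entrance to 'node' equals the reference delay.
    return abs(dist.get(node, float("inf")) - reference_length) < tol

def solve_maze(maze, entrance, exit_):
    dist = shortest_distances(maze, entrance)
    reference = dist[exit_]          # delay found by scanning until interference
    path, node = [exit_], exit_
    while node != entrance:          # one measurement per junction on the path
        for v, w in maze[node]:
            if interference(dist, v, reference - w):
                reference -= w
                node = v
                path.append(v)
                break
    return list(reversed(path)), dist[exit_]

# Toy maze: nodes are junctions, edge weights are the geometrical path lengths.
maze = {
    "in":  [("a", 1.0)],
    "a":   [("in", 1.0), ("b", 2.0), ("c", 1.5)],
    "b":   [("a", 2.0), ("out", 4.0)],
    "c":   [("a", 1.5), ("out", 1.0)],
    "out": [("b", 4.0), ("c", 1.0)],
}
print(solve_maze(maze, "in", "out"))   # -> (['in', 'a', 'c', 'out'], 3.5)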

 

 

Fig. 1: Solving a maze-type problem by white-light interferometry: A photon simultaneously runs through all paths of the setup. The shortest possible path is selected by interference (see text).

 

The computational cost of the solution therefore increases proportionally to N if we denote the size of the maze by N x N. This should be compared with traditional computer-based algorithms, where the cost increases proportionally to N^2. This improvement is not obtained for free, because the number of photons needed for the measurements increases exponentially with N. This is important and ultimately limits the maximum problem size that can be solved by the superposition approach. A detailed analysis shows that for 1 W of power at λ = 1 µm and a signal-to-noise ratio of 1 we could in principle solve mazes with about 70 junctions along the solution path. Depending on the type of maze this corresponds to a lateral maze size of some hundred elemental cells. Similar limits exist for other problems. The interference-based detection is helpful since it leads to a better signal-to-noise behavior of the method. In principle it would also be possible to perform the experiment with (short) light pulses. In this case we would detect the time when a pulse arrives at the exit of the maze.
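The order of magnitude of the ~70-junction limit can be made plausible with a rough back-of-the-envelope estimate (a sketch only, not the detailed analysis referred to above; the measurement time of 100 s is an assumption chosen purely for illustration): at every junction the light splits, so after K binary splittings only a fraction 2^-K of the photons remains on the correct path, and at a signal-to-noise ratio of 1 at least on the order of one photon has to arrive there.

import math

h = 6.626e-34          # Planck constant [J s]
c = 3.0e8              # speed of light [m/s]
wavelength = 1e-6      # 1 micrometre
power = 1.0            # 1 W
t_meas = 100.0         # assumed measurement time [s] (illustrative value)

photon_energy = h * c / wavelength            # about 2e-19 J
photons = power * t_meas / photon_energy      # photons emitted during t_meas

# Number of binary splittings after which, on average, only ~1 photon
# is left on the correct path:
print(math.floor(math.log2(photons)))         # -> 68, i.e. of the order of 70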

 

Fig. 2: Solving the Ricochet Robots problem with the same interference-based technique as shown in Fig. 1.

 

From the computational point of view, solving a variation of the maze problem is more interesting. Fig. 2 shows the board game "Ricochet Robots" (German: "Rasende Roboter") and its optical solution. This game is NP-complete, but the optical solution can still be found by the same method as used for the maze, with a small number of measurements proportional to the number of junctions. We just have to replace the walls by "optical wall elements", namely X-cubes.

 

The importance of this result stems from the fact that the problem belongs to the class of so-called NP-complete problems. "NP-complete" more or less means that for such problems the runtime for finding the optimum solution increases faster than polynomially with the problem size N. It can be shown that once an efficient solution to one NP-complete problem is found, it can be mapped (in polynomial time) to any other NP-complete problem. Therefore, fast optimum solutions to one NP-complete problem are of considerable importance for all other NP-complete problems.





The travelling salesman

Another famous example of an NP-complete problem is depicted in Fig. 3: A salesman wants to visit a set of N cities. The task is to find the shortest tour through all the cities with the additional constraint that each city is visited exactly once. In the classical TSP the trip should end at the same city where it started, i.e. we have a round trip.

It is easy to see that for N cities there are (N-1)!/2 possible tours. So even for a moderate number of cities one would have to check an astronomical number of possible tours (e.g. 30 cities lead to about 4.4 x 10^30 tours) in order to find the shortest one. For 60 cities there would be more tours than there are atoms in the universe.
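The numbers quoted above follow directly from the (N-1)!/2 formula; a short check (illustrative only):

from math import factorial

def number_of_tours(n_cities):
    # Closed round trips; direction and starting city do not matter.
    return factorial(n_cities - 1) // 2

print(f"{number_of_tours(30):.2e}")   # -> 4.42e+30
print(f"{number_of_tours(60):.2e}")   # -> 6.93e+79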

Many quite good heuristic algorithms are available that give very good solutions to this problem, but (assuming the widely believed conjecture that P does not equal NP) no polynomial-time algorithm can guarantee the optimum solution of the TSP. So, in general, the global optimum for large problems cannot be found.

 

Fig. 3: Solving the travelling salesman problem by white-light interferometry. It is necessary to introduce delays with exponentially increasing lengths into the setup.

 

 


The proposed system (as shown in Fig. 3) is again essentially a white-light interferometer with a complicated signal path. A wave packet of limited temporal extension (due to the short coherence length) entering the signal path will travel through the network of N connected cities and finally might exit at the final (=initial) city. If the optical path length through the network corresponds to the reference path length, interference at the detector will be observed.

The cities are connected by input and output fibers whose lengths correspond to the real distances between the cities.

 

 

Fig. 4: The structure of one city in the TSP

 

Fig. 4 shows the structure of one city. It has N input and N output channels. A photon entering one of the input channels travels through a fiber-optical delay line before entering a fan-out element that sends it to one of the output channels. In the quantum-mechanical particle picture the choice of the output port is purely random; in the wave-optical picture the wave is of course split among the N output channels. To make the cities distinguishable we use additional delay lines of different lengths within the cities. How the individual delays should be set, and how the measurement result can then be analyzed to obtain the solution of the TSP, is described in detail in [3,4].
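To get a feeling for how the superposition of path lengths encodes the tours, the following toy model may help. It is a purely numerical brute-force sketch with a made-up distance matrix; it omits the city-internal marker delays and is not the delay scheme of Refs. [3,4]. Every closed tour corresponds to one total path length, and scanning the reference delay from zero upwards singles out the length of the shortest tour.

from itertools import permutations

# Symmetric distance matrix for 5 cities (made-up numbers).
D = [
    [0, 2, 9, 10, 7],
    [2, 0, 6,  4, 3],
    [9, 6, 0,  8, 5],
    [10, 4, 8, 0, 6],
    [7, 3, 5,  6, 0],
]
n = len(D)

def tour_length(tour):
    # Closed round trip: return to the starting city at the end.
    return sum(D[tour[i]][tour[(i + 1) % n]] for i in range(n))

# "Superposition": the set of path lengths of all possible round trips
# through the network (brute force here, fully parallel in optics).
lengths = {}
for perm in permutations(range(1, n)):
    tour = (0,) + perm
    lengths.setdefault(tour_length(tour), tour)

# The first interference signal when scanning the reference delay from
# zero upwards appears at the length of the shortest tour.
best = min(lengths)
print(best, lengths[best])   # -> 26 (0, 1, 3, 2, 4)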

 

Again, the power (and the limitation concerning the maximum problem size) is due to the superposition of all possible paths that a photon can travel. Strictly speaking, the reduction from NP to P is not completely achieved, because the delays increase exponentially in length with increasing problem size. This means that after setting up the experiment and switching on the light source one would have to wait until all possible paths are flooded with photons, so this setup time would increase exponentially. For the Ricochet Robots application this is not a limitation, because there we would not need exponentially increasing delays.



Ultra-fast computing

 

In the following I will propose a system that leads to the fastest optical computation (of arithmetic expressions) that one can think of. Fig. 5 shows a simple example of how to compute the sum of two numbers using the wave-optical computing approach. In the figure, interference will of course be detected if C equals A+B. Subtractions and multiplications with fixed constants are equally easy to implement.

 

 

Fig. 5: Example of the simplest analog-optical computation (addition) with the method.

 

We can easily extend this to a digital-optical version by replacing the analog path lengths with quantized lengths using a binary representation of the numbers. For the lower path (encoding C) we want to realize all possible integer path lengths simultaneously. Different optical methods are possible to achieve this. In Fig. 6 we employ beamsplitters. The beamsplitters ensure that a photon entering the lower path runs either exactly once or not at all through a loop representing one bit.

 

At the output of the reference path we therefore obtain the superposition of all 2^N possible integer path lengths, with N being the number of bits. One of these 2^N outgoing light fields will interfere with the light running through the upper path (through A and B), and this, of course, corresponds to the unknown solution C. How can we now check which photons took the correct path? Fortunately, we do not have to know this in order to find the unknown C. We just block the loop of bit i of C to check whether bit i really is set. But we can do even better, because we can easily perform these N blockings in parallel. For each bit i we build the same basic system with a block of the i-th loop. So we have N interference detectors, and by looking at these detectors we immediately obtain the binary representation of C; the solution is found in parallel in one step.
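A small simulation may make the parallel read-out clearer. It is a sketch under assumptions: the arithmetic operation is taken to be an addition C = A + B, the loop of bit i is assumed to have length 2^i, and a dark detector i is interpreted as "bit i of C is set"; the optical details of Fig. 6 are not modeled.

def reference_lengths(n_bits, blocked_bit=None):
    # All integer path lengths realizable in the lower arm: each bit loop
    # of length 2**i is traversed once or not at all. Blocking loop i
    # removes every length whose bit i would be set.
    lengths = set()
    for value in range(2 ** n_bits):
        if blocked_bit is not None and value & (1 << blocked_bit):
            continue
        lengths.add(value)
    return lengths

def compute_sum(a, b, n_bits=5):
    c = a + b                      # optical path length of the upper arm
    bits = []
    for i in range(n_bits):        # N detectors working in parallel
        detects = c in reference_lengths(n_bits, blocked_bit=i)
        # If blocking loop i destroys the interference, the matching path
        # must have used that loop, i.e. bit i of C is set.
        bits.append(0 if detects else 1)
    return sum(bit << i for i, bit in enumerate(bits))

print(compute_sum(13, 9))   # -> 22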

 

 

Fig. 6: Digital-optical implementation for the computation of one bit of the solution of the subtraction of A and B (both 4 bits). For every bit of the solution one has to implement a block like the lower part of the figure.

 

How fast is such a computation? At first one might assume that the speed is completely determined by the time a photon needs to travel through the network before interfering at the detector. Fortunately, this is not true. When switching one bit of B, it takes only the time that the photon travels until it hits the detector after passing the switch (provided that there are already photons in the network). The crux is that we should build the system in such a way that the distance between detector and switches is minimized. By making this distance very small, we can compute complex operations as fast as the time of flight between the switches and the detector. This is in contrast to other approaches to digital-optical computation and only works due to the superposition of all possible results.

 

Fig. 7: Fast version of the computation with serial number representation.

 

For the maximum speed that one can hope to achieve with optics, we would therefore have to abandon the binary system and put the switches controlling the input operands directly in front of the output detectors. That way the small distance between input and output (e.g. 50 µm) is the only latency present in the computation. For 50 µm this corresponds to roughly 170 fs (femtoseconds).
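The 170 fs figure is simply the free-space time of flight over 50 µm:

c = 3.0e8                 # speed of light [m/s]
distance = 50e-6          # switch-to-detector distance [m]
print(distance / c)       # about 1.7e-13 s, i.e. roughly 170 fs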

 

Of course, for a practical implementation the number of bits for the operands (and the result) would be strongly limited by this number representation. But one could employ a residue number representation in order to achieve the same speed even with high accuracy (see Ref. [9] for the finally proposed system).
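As an illustration of why a residue representation helps, here is a generic residue-number-system sketch (not the specific system of Ref. [9]; the moduli are an arbitrary example): a number is represented by its remainders with respect to a set of small pairwise coprime moduli, each residue can be processed independently and carry-free (and hence with the short path lengths discussed above), and the result is recovered via the Chinese remainder theorem.

from math import prod

MODULI = (5, 7, 9, 11, 13)   # pairwise coprime, dynamic range = 45045

def to_residues(x):
    return tuple(x % m for m in MODULI)

def add(r1, r2):
    # Addition works digit-wise and carry-free on the small residues.
    return tuple((a + b) % m for a, b, m in zip(r1, r2, MODULI))

def from_residues(r):
    # Chinese remainder theorem reconstruction (pow(..., -1, m) needs Python 3.8+).
    M = prod(MODULI)
    x = 0
    for ri, mi in zip(r, MODULI):
        Mi = M // mi
        x += ri * Mi * pow(Mi, -1, mi)
    return x % M

print(from_residues(add(to_residues(1234), to_residues(4321))))   # -> 5555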

 

The main drawback (apart from the fact that we do not really need such ultra-high-speed computing of simple arithmetic expressions) is that we cannot cascade operations without sacrificing speed.



References


The proposed method of using superposition of different paths can be used for quite different applications in computing. More detailed information concerning the presented  ideas (and some more applications) can be found in the following references.

 

[1] Oltean, M., "A light-based device for solving the Hamiltonian path problem," Lecture Notes in Computer Science 4135, pp. 217-227, 2006.
[2] Oltean, M., "Solving the Hamiltonian path problem with a light-based computer," Natural Computing 7, pp. 57-70, 2008.
[3] Haist, T. and Osten, W., "An optical solution for the traveling salesman problem," Opt. Express 15, pp. 10473-10482, 2007.
[4] Haist, T. and Osten, W., "An optical solution for the traveling salesman problem: erratum," Opt. Express 15, p. 12627, 2007.
[5] Oltean, M. and Muntean, O., "Solving the subset-sum problem with a light-based device," Natural Computing, DOI 10.1007/s11047-007-9059-3, 2007.
[6] Dolev, S. and Fitoussi, H., "The traveling beams optical solutions for bounded NP-complete problems," Lecture Notes in Computer Science 4475, pp. 120-134, 2007.
[7] Oltean, M. and Muntean, O., "Exact cover with light," http://arxiv.org/pdf/0708.1962
[8] Haist, T., "Ultra-fast waveoptical computing," Proc. SPIE 7072, 7072M, 2008.
[9] Haist, T. and Osten, W., "Ultra-fast digital optical arithmetic using waveoptical computing," Lecture Notes in Computer Science 5172, pp. 33-45, 2008.
[10] Oltean, M. and Muntean, O., "Solving NP-Complete Problems with Delayed Signals: An Overview of Current Research Directions," Lecture Notes in Computer Science 5172, pp. 115-128, 2008.