
Energy Levels of Molecules are Bounded Below


Editor’s Note: This guest submission is from Nikita Lisitsa, a professional software developer and mathematician. You can follow him on Twitter.

Are Molecules Perpetual Motion Machines?

Short answer: no, of course not. Perpetual motion machines do not exist. There are deep theoretical reasons for that, as well as less deep but nevertheless convincing experimental data. That’s why we care so much about renewable energy and stuff: we know that energy supply is limited in any real physical system.

From the perspective of mechanics (classical, relativistic, or quantum), when you take some energy from a system, it jumps to a state with lower energy. If there were a system with no lower bound on energy levels, we could withdraw energy forever, forcing the system to fall lower and lower on the energy ladder (this is roughly what forced Dirac to introduce the concept of the Dirac sea). Thus, a sufficiently good mechanical system should have a lower bound on possible energy states.

We now focus on atoms and molecules. What would bound their energy levels from below? To begin, we need a good framework to describe these energy levels. Unfortunately, classical mechanics fails here: the virial theorem implies that for a potential of the form V(r) \sim \frac{1}{r}, the time averages of the kinetic and potential energies, denoted \langle T\rangle and \langle V\rangle respectively, are related by 2\langle T \rangle = -\langle V \rangle, and the total energy E (which is conserved) is given by E = \langle E \rangle = \langle T \rangle + \langle V \rangle = \frac{1}{2} \langle V \rangle.

For a single electron (which has charge -e) moving in the Coulomb field of a nucleus with charge Ze, the potential energy is V(r) = -\frac{kZe^2}{r}, where k is the Coulomb constant. Thus, if the distance r to the nucleus tends to zero, the potential energy tends to -\infty, and so does the total energy E. This, together with the fact that an electron orbiting a nucleus would lose energy in the form of electromagnetic waves, was one of the major problems of classical mechanics that needed a “quantum treatment”.

Annoyingly Fast and Heavy Introduction to Quantum Mechanics

Let’s dive into the (non-relativistic) quantum world. Unfortunately, we are immediately forced to accept an approximation; as of this writing, I’m unaware of a solution that doesn’t use one. The Born-Oppenheimer approximation suggests that since the protons and neutrons are about 2000 times heavier than electrons, they move much slower and can be considered static. Thus, we must now prove that a system of N electrons moving in a field of K fixed nuclei has a lower bound on possible values of energy.

Under this framework, a molecule is a carefully chosen separable Hilbert space of possible states, together with a “nice” essentially self-adjoint operator acting on it. The Hilbert space we will use is denoted (L^2(\mathbb{R}^3))^{\otimes N} \cong L^2(\mathbb{R}^{3N}), where \cong denotes an isomorphism. The notation indicates that we can represent an element of our Hilbert space in two equivalent ways; we’ll use the second: a square-integrable function of 3N real variables, which are the coordinates of the electrons – 3 coordinates for each of the N electrons. The inner product of two functions \psi and \phi is defined as an integral over the whole 3N-dimensional space:

\langle\psi,\phi\rangle = \int\limits_{\mathbb{R}^{3N}} \overline\psi(r_1\dots r_N) \phi(r_1 \dots r_N) dr_1 \dots dr_N

Remark: The actual space should be tensored by \mathbb C^{2^N} to account for spin, but the effect of spin on energy is too small for us to care about here. Nevertheless, it exists. Furthermore, the elements of the space are actually not functions; they are equivalence classes of functions modulo functions of zero norm. Following the usual colloquial terminology, we’ll call them functions nevertheless; we’ll refer to them as wavefunctions.

A physical state is not the same as a wavefunction: you can multiply the wavefunction by an arbitrary non-zero complex constant, and the result represents the same state. States form some peculiar kind of space; we shall stick to wavefunctions, which are elements of a vector space, so we can use the full power of linearity. Thus, it is common to assume wavefunctions to be normalized so that \langle \psi,\psi\rangle=1. We shall assume this too.

Next, we need to define the Hamiltonian. Here, the Hamiltonian is similar to that in non-quantum mechanics, with the exception that we must quantize it: turn numbers into operators. The general solution to this problem is unknown. (See Groenewold’s theorem on the inconsistency of canonical quantization and John Baez’s article series on some modern approaches). In our case, however, the recipe is simple: turn the kinetic energy \frac{p_i^2}{2m} into -\frac{\hbar^2}{2m} \Delta_i and turn the potential energy V(r_1,r_2,\dots,r_N) into an operator that multiplies a wavefunction by V.

The energy consists of 

  • the kinetic energy of electrons,
  • the attraction of these electrons to the nuclei, and
  • the repulsion energy between electrons

The usual quantum-mechanical way to write this is:
H = -\sum\limits_i \frac{\hbar^2}{2m} \Delta_i - \sum\limits_{i,A} \frac{kZ_Ae^2}{|r_i - R_A|} + \sum\limits_{i<j} \frac{ke^2}{|r_i - r_j|}

We now detail each item in the above equation:

  • i,j are indices that run over the electrons in the system
  • A runs over nuclei
  • \Delta_{i} is the Laplacian with respect to i-th electron coordinates, i.e. \frac{\partial^2}{\partial x_i^2}+\frac{\partial^2}{\partial y_i^2}+\frac{\partial^2}{\partial z_i^2}
  • m is the electron mass
  • k is the Coulomb constant
  • Z_A is the number of protons in the A-th nucleus
  • e is the electron charge
  • r_i is the position of i-th electron
  • R_A is the position of A-th nucleus (which are fixed by Born-Oppenheimer)

Nobody likes constants, right? Let’s stick to atomic units instead, where m=k=e=\hbar=1, and the speed of light is \frac{1}{\alpha} \approx 137, where \alpha is the fine-structure constant. The Hamiltonian becomes a bit cleaner: H = -\sum\limits_i \frac{1}{2} \Delta_i-\sum\limits_{i,A} \frac{Z_A}{|r_i - R_A|} + \sum\limits_{i<j} \frac{1}{|r_i - r_j|}

What does it mean? An operator should act on wavefunctions. For a function \psi, the operator takes some second derivatives of \psi, multiplies \psi by other functions, and adds all this up. The result is another function – the result of the operator acting on \psi, called H\psi. One important thing is that this operator is essentially self-adjoint, and we shall treat it as if it were self-adjoint: this follows from the Laplacian \Delta_i being self-adjoint, and from multiplication by a real-valued function being a self-adjoint operator as well.

The energy (well, the average or expectation value of the energy) of a state \psi is \langle \psi, H\psi\rangle (the inner product of \psi and H\psi), which happens to be equal to \langle H\psi , \psi\rangle, thanks to self-adjointness. Physicists love to use bra-ket notation for expressions like this, and write it as \langle \psi | H | \psi\rangle.

Concerning Energies

We have a space of functions, a differential operator acting on these functions, and an integral that we wish to bound from below whenever the functions are normalized.

Our way to prove that the expression \langle \psi, H\psi\rangle is indeed bounded below will be broken up into three parts.

Ignoring Repulsion

A good thing about the average energy is that it is linear in the energy operator. This means that, for a sum of two operators, we have \langle \psi, (H_1+H_2)\psi \rangle = \langle \psi, H_1\psi \rangle + \langle \psi, H_2\psi \rangle, which means that we can analyze the terms in the Hamiltonian separately.

Now, the last term is electron-electron repulsion. Let’s write it down again:
\sum\limits_{i<j} \frac{1}{|r_i - r_j|}

The only thing that matters for our purposes is that this operator is multiplication by a nonnegative function, which we’ll call f(r_1,\dots,r_N)=\sum\limits_{i<j} \frac{1}{|r_i - r_j|} for now. Non-negativity implies that the expectation value of this term is never below zero:

\langle \psi, f \cdot \psi \rangle = \int\limits_{\mathbb R^{3N}} |\psi|^2 f \, dr_1 \dots dr_N \geq 0

Thus, since the expectation value is linear, we can simply drop the electron-electron repulsion term: this action can only lower the energy; since we want to find a lower bound on energy levels, this is okay.

Separating Electrons

Now the simplified Hamiltonian looks like this:
H' = -\sum\limits_i \frac{1}{2} \Delta_i - \sum\limits_{i,A} \frac{Z_A}{|r_i - R_A|}

The electrons do not interact anymore, which makes the total Hamiltonian a sum of single-particle Hamiltonians of individual electrons: H' = \sum\limits_i \left[ -\frac{1}{2} \Delta_i - \sum\limits_{A} \frac{Z_A}{|r_i - R_A|} \right] = \sum\limits_i H_i,

where each of H_i depends only on the i-th electron coordinates. In the tensor-product-version of the description of our Hilbert space of states, this means that H_i acts on the i-th component of the tensor product (L^2(\mathbb R^3))^{\otimes N}. Whatever Hilbert space description you use, this implies that the whole system Hamiltonian H' is separable, so analyzing the expectation values of H_i is, in essence, a single-particle problem: the wavefunction \psi(r_1, \dots, r_N) does still depend on all electrons’ coordinates, but they are ignored by H_i. So, if we find a lower bound for each of H_i, we are done.

Reducing to a Hydrogen-Like Atom

Everything before this section was actually pretty straightforward; it is now that we’ll do a simple trick. Look at H_i again:
H_i = -\frac{1}{2} \Delta_i - \sum\limits_{A} \frac{Z_A}{|r_i - R_A|}

It would be awesome if we could put the Laplacian under the sum, for the resulting summands would be just the Hamiltonians of a hydrogen-like atom: a quantum system with one charged particle orbiting another charged particle. This is one of the most important quantum systems, and one of the few that have an exact, known analytical solution.

So, let’s actually put the Laplacian under the sum!

H_i = \sum\limits_{A} \left[ -\frac{1}{2K} \Delta_i - \frac{Z_A}{|r_i - R_A|} \right] = \sum\limits_A H_{i,A},

where K is the number of nuclei (so that -\frac{1}{2}\Delta_i is split evenly among the K summands). Now, each term of the sum is the Hamiltonian of an atom-like system, with a nucleus of charge Z_A and an electron orbiting it, except that the electron has mass K instead of 1. Thankfully, this is not a problem: plug m_e=K into the exact solutions of the hydrogen-like atom, and we get solutions for each H_{i,A} term of the sum above. We don’t need the actual solutions, though; all we need is the fact that the expectation value of energy is bounded below in this solution.

Summing It Up

So, what we’ve shown is H \geq H' = \sum\limits_i H_i = \sum\limits_{i,A} H_{i,A}, and the expectation value of each H_{i,A} is bounded below; therefore, the expectation value of H itself is bounded below.
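To make the bound explicit (this constant is my addition, using the standard ground-state energy of a hydrogen-like atom, which in atomic units is -\frac{mZ^2}{2} for electron mass m and nuclear charge Z): each H_{i,A} has ground-state energy -\frac{K Z_A^2}{2}, so

\langle \psi, H\psi \rangle \geq \sum\limits_{i,A} \left( -\frac{K Z_A^2}{2} \right) = -\frac{NK}{2}\sum\limits_A Z_A^2

for every normalized \psi.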

This whole idea was inspired by Tosio Kato’s paper “Fundamental properties of Hamiltonian operators of Schrödinger type”. You’ll find many more details on the topic there.

All this is just a tiny bit of the whole story of why the world around us doesn’t explode/freeze/break apart. For a full exposition, check out “The stability of matter: from atoms to stars” by Elliott H. Lieb. I should thank Valentin Fadeev for introducing me to this awesome work.

Exploiting Chemistry for Better Packet Flow Management 5: Chemical Congestion Control, Design Motifs, and Conclusion


This is the final installment of the series reviewing the 2011 technical report by Meyer and Tschudin. Part 1 gave an overview of the report and the problems it aimed to solve, as well as the chemistry basics necessary for further understanding. Part 2 discussed artificial chemistries and the extension to an artificial packet chemistry, setting the mathematical framework for analyses and implementation. Part 3 briefly discussed some methods of network analysis available once the artificial packet chemistry was developed. Part 4 gave an implementation of a scheduler based on the packet chemistry model, as well as the idea of a chemical control plane and its implementation. This final part will discuss one final application, a chemical congestion control algorithm, as well as mention design motifs pointed out by the authors and conclude our analysis of this report.

Chemical Congestion Control Algorithm

[Figure reproduced from Meyer and Tschudin: the chemical congestion control reaction network.]
As a final application of chemical networks, the authors combine LoMA-scheduled queues and flow-filtering patterns to schedule segments of a transport protocol to control congestion. To illustrate, they re-implement the additive increase/multiplicative decrease of the congestion avoidance mode of TCP-Reno. The congestion control algorithm reacts to packet loss automatically and naturally. 
 
The reproduced figure above shows how the chemical congestion control is implemented as a chemical reaction network (and by extension, a queueing network). 
 
 
  1. Arriving packets are put into a queue D. The transmission rate \nu_{tx} is controlled by the quantity of pacemaker molecules R, so \nu_{tx} = k_{1}c_{R}c_{D}, once again according to the Law of Mass Action. To mimic the additive (linear) increase mechanism of TCP-Reno, the number of pacemaker molecules is increased at a rate \nu_{\text{inc}}.
  2. Before packets are transmitted, they are tagged with a sequence number. If there is a gap in the sequence numbers of acknowledgments from the destination, the source regenerates the packets at a queue L.
  3. A lost packet will catalyze the destruction of pacemaker molecules by another reaction r_{2}, which will lead to the exponential decay of R-molecules and thus decrease the transmission rate. However, we wish to prevent too fast a destruction of pacemaker molecules, so a third reaction r_{3} will delay the destruction. (A small numerical sketch of these dynamics follows this list.)
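As a rough illustration of the additive-increase/multiplicative-decrease behavior this network produces, here is a minimal Python sketch of the pacemaker dynamics; the rate constants and the loss burst are invented for illustration and are not taken from the report.

```python
# Toy ODE sketch of the pacemaker dynamics: R grows linearly (additive
# increase), while a loss signal L catalyzes its decay (multiplicative
# decrease). All constants below are invented for illustration.
nu_inc = 50.0      # pacemaker production rate (molecules/s)
k2 = 2.0           # loss-catalyzed destruction constant
R, dt = 100.0, 1e-3
for step in range(5000):                  # simulate 5 seconds
    t = step * dt
    L = 5.0 if 2.0 <= t < 2.5 else 0.0    # a burst of detected losses
    R += (nu_inc - k2 * L * R) * dt       # linear growth, exponential decay under loss
print(R)   # R dips sharply during the loss burst, then recovers linearly
```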
 
 
The authors encourage the use of such a reaction graph at the design phase of flow management policies, where the feedback nature is much clearer. In addition, the paper gives a formal proof that their congestion control model is TCP-fair at equilibrium; that is, the transmission rate is proportional to \frac{1}{\sqrt{p_{\text{loss}}}}, where p_{\text{loss}} is the probability of packet loss between source and destination. They also discuss an extended version that reacts to variation in round-trip time (RTT) to more fully exploit the link bandwidths. The traffic statistics are not computed symbolically with chemical reactions; instead, another reaction builds a difference signal such that, at equilibrium, the fill level of its queue is proportional to the excess transmission rate. That signal decays the pacemaker molecules. They also supply simulations to illustrate their implementations.
 

Design Motifs

 
This section is of particular interest to me personally, and was treated the least. No design process can ever be fully automated, but the authors claim to have developed several design motifs of chemical reaction networks for a variety of purposes, ranging from arithmetic computation of fill levels to communication patterns (anycast, neighborhood discovery, etc.). Unfortunately, they do not give a direct citation as to where to look further. This report will be updated when such information is found.
 
 
Figure 12 shows the only two motifs provided by the authors: one for rate limiting (a), and the other a bimolecular reaction (b) that computes the difference between the arrival rates of two queues. The concept of using these as design elements is extremely intriguing, and it is unfortunate that the authors did not choose to expand this further.
 

Conclusion

 
Meyer and Tschudin have given an extensive report showing how powerful the application of chemical kinetics and chemical networks can be for the computer networking space. There are several research opportunities available for further study and implementation. As yet, there have been no notable citations of this work (the report came out in 2011), and thus the opportunity seems ripe for exploration.

References

  1. Dittrich, P., Ziegler, J., and Banzhaf, W. Artificial chemistries – a review. Artificial Life 7 (2001), 225–275.
  2. Feinberg, M. Complex balancing in general kinetic systems. Archive for Rational Mechanics and Analysis 49 (1972).
  3. Gadgil, C., Lee, C., and Othmer, H. A stochastic analysis of first-order reaction networks. Bulletin of Mathematical Biology 67 (2005), 901–946.
  4. Gibson, M., and Bruck, J. Efficient stochastic simulation of chemical systems with many species and many channels. Journal of Physical Chemistry 104 (2000), 1876–1889.
  5. Gillespie, D. The chemical Langevin equation. Journal of Chemical Physics 113 (2000).
  6. Gillespie, D. The chemical Langevin and Fokker-Planck equations for the reversible isomerization reaction. Journal of Physical Chemistry 106 (2002), 5063–5071.
  7. Horn, F. On a connexion between stability and graphs in chemical kinetics. Proceedings of the Royal Society of London 334 (1973), 299–330.
  8. Kamimura, K., Hoshino, H., and Shishikui, Y. Constant delay queuing for jitter-sensitive IPTV distribution on home network. IEEE Global Telecommunications Conference (2008).
  9. Laidler, K. Chemical Kinetics. McGraw-Hill, 1950.
  10. McQuarrie, D. Stochastic approach to chemical kinetics. Journal of Applied Probability 4 (1967), 413–478.
  11. Meyer, T., and Tschudin, C. Flow management in packet networks through interacting queues and law-of-mass-action scheduling. Technical report, University of Basel.
  12. Pocher, H. L., Leung, V., and Gilles, D. An application- and management-based approach to ATM scheduling. Telecommunication Systems 12 (1999), 103–122.
  13. Tschudin, C. Fraglets – a metabolistic execution model for communication protocols. Proceedings of the 2nd Annual Symposium on Autonomous Intelligent Networks and Systems (2003).


Exploiting Chemistry for Better Packet Flow Management 4: Scheduler Implementation


This article is the fourth in a series reviewing the technical report of Meyer and Tschudin [11], who have extended the notion of an artificial chemistry to an artificial packet chemistry with the intention of exploiting the natural behavior of chemical reactions to design better flow management policies for computer networks. Part 1 introduced the chemistry foundations necessary for study of this technical report, including the Law of Mass Action. Part 2 elaborated on the mathematical model of an artificial packet chemistry. Part 3 discussed the various types of mathematical analyses for various queueing questions now available with the expression of a computer network and its flow as an artificial packet chemistry. This part will discuss an actual engineering application of all the ideas discussed thus far: a scheduler and a chemical control plane.

Implementation of a Scheduler Based on the Law of Mass Action

Likely, this section will be of greatest interest to network engineers. The authors have indeed designed and implemented a scheduler that utilizes this approach in an elegant fashion. In addition, they discuss a “chemical control plane” that can automatically be compiled from the abstract model. In another application, they relax the static nature of the network to allow an active networking approach that reshapes the queuing network at run-time. The authors do discuss specifics of implementation, though this article will only briefly touch on it.

Scheduler

Each network node/reaction vessel has its own scheduler. The scheduler computes the next occurrence time of each rule r \in R_{i} in its local node (this is equivalent to “serving” or processing a packet, or a set of packets for bimolecular reactions) according to the Law of Mass Action. It then sorts the events into a priority queue, waits until the first event occurs, then executes it. The main difficulty for a scheduler is to dynamically react and reschedule events properly as packets are added to or drained from its queues. The authors note that an efficient mass-action scheduler can be implemented that requires only O(\log(|\mathcal{R}|)) time to enqueue or dequeue packets. This is based on the Next Reaction Method[4] of Gibson and Bruck.

Here we’ll recount an explicit example that illustrates the concept. If we return to Figure 1 reproduced below, we can walk through Meyer and Tschudin’s scheduler implementation.

[Figure 1 from Meyer and Tschudin: two queues X and Y, drained by a bimolecular reaction (Server 1) and a unimolecular reaction (Server 2).]

There are two queues, X and Y. Reaction 1 (Server 1) is bimolecular: X+Y \rightarrow Z, so the server pulls packets from two queues to execute the service. Reaction 2 (Server 2) is unimolecular, pulling only from queue Y. If we assume the reaction constants k_{1} = 1000/(\text{packet}\cdot s) and k_{2} = 1000/\text{s}, that X begins with two packets in its queue, and Y begins with 3 packets in its queue, then the reaction rates \nu_{r}, r=1,2 are respectively \nu_{1} = k_{1}c_{X}c_{Y} = 1000\cdot2\cdot3 = 6000 and \nu_{2} = k_{2}c_{Y} = 1000\cdot 3 = 3000. The occurrence time is the reciprocal of the reaction rate, so the occurrence times \tau_{r} are respectively \tau_{1} = \frac{1}{6} ms and \tau_{2} = \frac{1}{3} ms. That means the first server executes its action first, extracting packets from both X and Y. 

Since the occurrence time of r_{2} is coupled with r_{1} (both servers pull from queue Y), the action of r_{1} requires a rescheduling of r_{2}. After r_{1} pulls a packet each from X and Y, there is 1 packet left in X and 2 in Y, which means we have to recalculate the rate \nu_{2} = 1000\cdot 2 = 2000. The occurrence time of r_{2} is \frac{1}{3} ms, so its time of execution hasn’t arrived. But thanks to r_{1}‘s effect, we have to rescale and reschedule the occurrence time of r_{2}. This is done by the following:

\tau_{r,\text{new}} = \frac{\nu_{r,\text{new}}}{\nu_{r,\text{old}}}(\tau_{r,\text{old}}-t_{\text{now}}) + t_{\text{now}},

where (\tau_{r,\text{old}} -t_{\text{now}}) is the time remaining between the original execution time and the current time. The multiplier in front is a scaling effect. 

In this example, at t_{\text{now}} = 1/6 ms, r_{2} was supposed to go at time 1/3 ms, but will now be prolonged. 

A note here, I did the math for their specific example, and it seems off. I think the multiplier should be as I’ve written above. The authors wrote the reciprocal, which prolongs too far. I’ll work to contact the authors to verify this. 
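To make the bookkeeping concrete, here is a minimal Python sketch of the two-queue example. Note that it uses the Gibson-Bruck form of the rescaling, \frac{\nu_{\text{old}}}{\nu_{\text{new}}}, which stretches the remaining time when a rate drops (consistent with r_{2} being prolonged in the walk-through above); flip the ratio to get the version in the formula and compare, per the note on the discrepancy.

```python
# Minimal sketch of the two-queue walk-through (r1: X + Y -> Z, r2 drains Y),
# using the Gibson-Bruck rescaling nu_old/nu_new. Times are in seconds.
k = {"r1": 1000.0, "r2": 1000.0}
reactants = {"r1": ("X", "Y"), "r2": ("Y",)}
fill = {"X": 2, "Y": 3}

def rate(r):
    nu = k[r]
    for q in reactants[r]:
        nu *= fill[q]                    # Law of Mass Action: nu = k * prod(fills)
    return nu

t = 0.0
nu = {r: rate(r) for r in reactants}
tau = {r: t + 1.0 / nu[r] for r in reactants}    # occurrence times: 1/6 ms, 1/3 ms

fired = min(tau, key=tau.get)            # r1 fires first, at t = 1/6 ms
t = tau.pop(fired)
for q in reactants[fired]:
    fill[q] -= 1                         # one packet leaves X and one leaves Y

for r in tau:                            # reschedule reactions coupled through Y
    nu_new = rate(r)
    tau[r] = (nu[r] / nu_new) * (tau[r] - t) + t   # Gibson-Bruck rescaling
    nu[r] = nu_new

print(t, tau)  # t = 1/6000 s; r2 moves from 1/3 ms to 5/12 ms
```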

There are other timed scheduling algorithms utilized in computer networking, such as Earliest Deadline First, which require tagging each packet with a timestamp. This scheduler does not require such an imposition. 

The Chemical Control Plane

Here, the authors describe what they term as a chemical control plane that is intended to avoid the messy necessity of sending packets through a complex queueing network in order to shape packet flow as desired. The control plane takes advantage of concepts in enzymatic chemical reactions in order to control flow. This is a different application than the flow networks discussed thus far (as I understand it).

[Figure from Meyer and Tschudin: the chemical control plane separated from the packet forwarding plane.]

Here, the forwarding plane, which executes actions, is separated from the control plane, which shapes the flow of packets in the forwarding plane.

The chemical control plane will dynamically determine the service rates; the servers do not have them predefined. There are some number n of FIFO queues, one for each type of ingress packet flow, and they are drained by one server each, representing a unimolecular reaction. In the control plane, each queue is represented by an input species X_{i} and a product species X_{i}^{*}. The chemical reaction network lives abstractly in the control plane, which is designed by a traffic engineer and can look like any digraph or network the engineer wishes.

Note the difference between the prior sections, which dealt with physical flows modeled by a chemical reaction network, and this section, which moves the chemical reaction network to an abstract control plane. The queues now are not necessarily physically linked together, but we can choose to couple them abstractly to shape traffic.

When a packet physically enters one of the queues, the control plane injects one instance of the corresponding molecule species into the abstract network. The scheduler described previously is implemented and eventually an instance of the output species is generated. Once this happens, the corresponding server in the forwarding plane physically processes the packet and dequeues the next. The advantage here is that the abstract molecules in the control plane have no payload, so implementation of this model only requires storing an integer value for each species that keeps track of the number of packets in each queue. This allows analysis of behavior at the design phase. 

In the simplest case, a unimolecular reaction X \to X^{*} in the chemical control plane acts like a low-pass filter to the packet flow, smoothing bursts with high frequency components. If the differential equation \dot{x} = \lambda-kx that approximates a unimolecular reaction is converted to the frequency domain via the Laplace transform, the transfer function F(s) has a cut-off frequency at k, the reaction constant:

F(s) = \frac{\mu(s)}{\lambda(s)} = \frac{k}{s+k}

That is, higher-frequency flows will be attenuated, much like dark glasses do with sunlight. Applying this filter at an ingress point of a network leads to less chaotic traffic patterns, but with a cost of a delay \frac{1}{k} and memory to buffer the packets. Therefore, the mean queue length for this single queue will grow proportionally with the delay and flow rate. That is, \hat{x} = \frac{\lambda}{k}.
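As a quick numerical illustration of the cutoff (not from the report; the reaction constant is invented), the gain of F at frequency \omega is |F(j\omega)| = \frac{k}{\sqrt{\omega^2 + k^2}}:

```python
import math

# Gain of the first-order low-pass F(s) = k/(s+k) at a few frequencies.
# k_rate is an illustrative reaction constant, not a value from the report.
k_rate = 1000.0
for omega in (10.0, 100.0, 1000.0, 10000.0):
    gain = k_rate / math.hypot(omega, k_rate)   # |F(j*omega)|
    print(omega, round(gain, 3))
# gain is ~1 well below the cutoff, 1/sqrt(2) at omega = k, and rolls off above
```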

Another consideration of the LoMA queues described by Meyer and Tschudin that differs from the standard M/M/1 queuing models is that the service rate is ultimately unbounded (for infinite capacity queues/networks), since it is proportional to the queue length. This is undesirable to allow in a network, and thus the authors borrow from biological systems and design an abstract enzymatic reaction to limit the rate of packet flow.

In biological systems, enzymes bind to reactant molecules X, called substrates, in order to prevent a particular molecule from reacting immediately. Some number of enzyme molecules E exist, and they can either exist in free form or bound in a complex (EX). The more enzyme molecules in bound form, the slower the rate of transmission grows for an increasing arrival rate. At equilibrium, the influx and efflux of substrate-enzyme complex molecules are equal according to Kirchhoff’s law, so

k_{w}c_{X}c_{E} = k_{s}c_{EX}

Take a look at Figure 8 above in the chemical control plane to see this action. The number of enzymes is constant, so c_{E} + c_{EX} = e_{0}, which yields the Michaelis-Menten equation, expressing the transmission rate \mu in terms of the queue length c_{X}.

\mu = \nu_{\max}\frac{c_{X}}{K_{M} + c_{X}},

which yields a hyperbolic saturation curve. Here \nu_{\max} = k_{s}e_{0}, and K_{M} = \frac{k_{s}}{k_{w}} specifies the concentration of X at which half of \nu_{\max} is reached.

When the queue length at queue X is high, the transmission rate converges to \nu_{\max}; when the queue length is short, the reaction behaves like a normal unimolecular one.
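A few lines make the saturation visible; the constants here are invented for illustration:

```python
# Michaelis-Menten rate limiter: mu = nu_max * c_X / (K_M + c_X).
# Illustrative constants: k_s (complex service), k_w (binding), e0 enzymes.
k_w, k_s, e0 = 2.0, 1000.0, 10.0
nu_max = k_s * e0              # saturation transmission rate
K_M = k_s / k_w                # queue length giving nu_max / 2
for c_X in (1, 10, 100, 500, 5000):
    mu = nu_max * c_X / (K_M + c_X)
    print(c_X, round(mu))      # grows ~linearly when short, flattens at nu_max
```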

The authors also extend this model to handle dynamic changes to the topology of the queuing network, which means that instances of queues and flow relations can be generated “on the fly,” as it were. Tschudin[13] has created an executable string and multiset rewriting system called Fraglets that allows for the implementation and running of protocols based on the ideas put forth thus far. They describe in the paper how to explicitly implement the enzymatic rate limiter of the chemical control plane in Figure 8. In this implementation, rather than flow interactions being static and determined at the design phase, each fraglet (packet) sorts itself into a queue. After a packet is serviced, the header of a fraglet is treated as code, allowing a packet to determine its route in a manner comparable to active networking. The relationship between the abstract model and the execution layer remains, which allows a mathematical model of the behavior of a Fraglets implementation to be generated automatically, and a queuing network to be designed and then realized easily in the Fraglets language.

Continuation

The final part of this work will discuss an application of artificial packet chemistry to the implementation of a congestion control algorithm, briefly discuss design motifs, and conclude. 

References

  1. Dittrich, P., Ziegler, J., and Banzhaf, W. Artificial chemistries – a review. Artificial Life 7 (2001), 225–275.
  2. Feinberg, M. Complex balancing in general kinetic systems. Archive for Rational Mechanics and Analysis 49 (1972).
  3. Gadgil, C., Lee, C., and Othmer, H. A stochastic analysis of first-order reaction networks. Bulletin of Mathematical Biology 67 (2005), 901–946.
  4. Gibson, M., and Bruck, J. Efficient stochastic simulation of chemical systems with many species and many channels. Journal of Physical Chemistry 104 (2000), 1876–1889.
  5. Gillespie, D. The chemical Langevin equation. Journal of Chemical Physics 113 (2000).
  6. Gillespie, D. The chemical Langevin and Fokker-Planck equations for the reversible isomerization reaction. Journal of Physical Chemistry 106 (2002), 5063–5071.
  7. Horn, F. On a connexion between stability and graphs in chemical kinetics. Proceedings of the Royal Society of London 334 (1973), 299–330.
  8. Kamimura, K., Hoshino, H., and Shishikui, Y. Constant delay queuing for jitter-sensitive IPTV distribution on home network. IEEE Global Telecommunications Conference (2008).
  9. Laidler, K. Chemical Kinetics. McGraw-Hill, 1950.
  10. McQuarrie, D. Stochastic approach to chemical kinetics. Journal of Applied Probability 4 (1967), 413–478.
  11. Meyer, T., and Tschudin, C. Flow management in packet networks through interacting queues and law-of-mass-action scheduling. Technical report, University of Basel.
  12. Pocher, H. L., Leung, V., and Gilles, D. An application- and management-based approach to ATM scheduling. Telecommunication Systems 12 (1999), 103–122.
  13. Tschudin, C. Fraglets – a metabolistic execution model for communication protocols. Proceedings of the 2nd Annual Symposium on Autonomous Intelligent Networks and Systems (2003).


Exploiting Chemistry for Better Packet Flow Management 3: Formal Analysis


The previous two posts introduced the ideas of Meyer and Tschudin [11] involving the application and exploitation of chemical kinetic theory to flow management in computer networking. The first part introduced the ideas and gave an overview of the entire work, and the second part took a deeper look into the formal model of a packet chemistry. This section discusses the analysis options available once a packet chemistry model has been created. 

This section can also be skipped by those less interested in the formal mathematics. Suffice it to say that there is a multitude of existing methods now available for the elegant analysis of computer networks when modeled by an artificial packet chemistry.

Formal Analysis of Artificial Packet Chemistry

By representing packet flow in a computer network as an artificial chemistry, a multitude of analyses become available, from high to low granularity. The authors give a brief survey (and a good bibliography) of works from the physics and chemistry literature that can be utilized to analyze these networks. A particular advantage of this method is the ability to study the transient states of the network rather than just steady states. The authors also claim the ability to determine the stability of the network flow based only on topology, a major advantage in design.

Stochastic Analysis at the Microscopic Level

The stochastic behavior of chemical reaction networks is described by the chemical master equation[10] which takes the form
\frac{\text{d}\mathbf{P}}{\text{d}t} = \mathbf{A}\mathbf{P}

which is a differential equation describing the evolution of state probabilities for a system. Here the states are discrete, and time is continuous. The matrix \mathbf{A} describes the transition rates (which can also be kinetic or reaction rates), and the stochastic process described is a Markov jump process. Since we’re on a network, the Markov jump process lives on an |\mathcal{S}|-dimensional integer lattice. Some work has been done to analyze several classes of chemical reaction networks to find the steady-state probability distribution of the state space. For example, if the total number of packets in the network is bounded, and the network contains only first-order (unimolecular to unimolecular) reactions, the steady-state probability distribution for the lengths of the queues in the network is a multinomial distribution[3]. On the other hand, if the network is open (we allow packets to exit the network completely), then the steady-state probability distribution of the lengths of the queues follows a product of Poisson distributions (which is also Poisson)[3]. (This is an extremely desirable property, called a product-form.)
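The open, first-order case is easy to simulate directly. Here is a minimal Gillespie-style simulation of a single queue with external arrivals at rate \lambda and LoMA departures at rate kx; the rates are invented for illustration, and the queue length indeed fluctuates around the Poisson mean \lambda/k:

```python
import random

# Gillespie-style simulation of one open LoMA queue: arrivals at rate lam,
# departures at rate k*x (proportional to fill level). Illustrative rates.
random.seed(1)
lam, k_rate = 500.0, 1000.0
x, t = 0, 0.0
while t < 10.0:
    total = lam + k_rate * x            # total event rate
    t += random.expovariate(total)      # exponential time to next event
    if random.random() < lam / total:
        x += 1                          # arrival
    else:
        x -= 1                          # departure (never fires when x == 0)
print(x)  # samples hover around the Poisson mean lam/k = 0.5
```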

Deterministic Approximations

This is the most common approach utilized in computer network analysis today, simply because networks are so large and complex that stochastic modeling becomes too cumbersome. Here, the average trajectory is represented by a system of ordinary differential equations, building a fluid model. One downside to this in the networking space is that the analysis of protocols by this method requires manual extraction of a model from source code, and its accuracy is uncertain.

In the chemistry sector (and now in the packet chemistry model), obtaining a fluid approximation is not only easier, but shown to be accurate. There are links between the stochastic master equation and several approximations[5,6], including a deterministic ODE model. Gillespie[5] showed that the ODE model accurately predicts the network flow trajectory in many cases.

One thing the authors note here is that the ODE model can be directly and automatically generated from the network topology. For example, a single server with a single queue (M/M/1) is simply modeled as one chemical species X. The arrival rate (inflow) is \lambda, and the service rate is proportional to the queue length, so \mu = kx, where x is the queue length. Then we get a simple differential equation
\dot{x} = \lambda-kx
describing the change in queue length as the difference of inflow and outflow. In the steady state, \dot{x} = 0, which gives the fixed point \hat{x} = \frac{\lambda}{k}. This is the steady-state queue length, which allows us to derive the expected waiting time T = \frac{1}{k}, showing that the latency of a packet under this model is independent of the arrival rate and fill level. This model, when implemented, automatically adjusts the service rate such that in the steady state every packet sees the same latency.
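A forward-Euler integration confirms the fixed point in a few lines (parameters invented for illustration):

```python
# Quick numerical check of the fluid model xdot = lam - k*x for one LoMA queue.
lam, k_rate = 500.0, 1000.0        # arrival rate (pkt/s), reaction constant (1/s)
x, dt = 0.0, 1e-5                  # start with an empty queue
for _ in range(10_000):            # integrate 0.1 s of model time
    x += (lam - k_rate * x) * dt   # forward Euler step
print(x, lam / k_rate)             # x converges to the fixed point lam/k = 0.5
```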

It’s also important to determine just how stable this steady state is by analyzing the sensitivity of the network and states to perturbations. The authors list several citations to show that no new approaches are needed to do this; one can look to signal and control theory literature. In particular, a network designer would desire to predict the stability of a complex network by studying the topology as opposed to an analysis of the system of ODEs. Fortunately, modeling a network this way allows for the use of the Deficiency Zero Theorem for complex chemical networks that gives conditions for stability of steady-state[2,7].

The authors give a formal convergence proof that the example network above converges to a stable fixed point and is asymptotically stable, comparing it to the proof for a similar protocol, Push-Sum (a gossip protocol in computer networks).

Continuation

The next post in this series will discuss Meyer and Tschudin’s implementation of a scheduler based on the principles discussed thus far. 

References

  1. Dittrich, P., Ziegler, J., and Banzhaf, W. Artificial chemistries – a review. Artificial Life 7 (2001), 225–275.
  2. Feinberg, M. Complex balancing in general kinetic systems. Archive for Rational Mechanics and Analysis 49 (1972).
  3. Gadgil, C., Lee, C., and Othmer, H. A stochastic analysis of first-order reaction networks. Bulletin of Mathematical Biology 67 (2005), 901–946.
  4. Gibson, M., and Bruck, J. Efficient stochastic simulation of chemical systems with many species and many channels. Journal of Physical Chemistry 104 (2000), 1876–1889.
  5. Gillespie, D. The chemical Langevin equation. Journal of Chemical Physics 113 (2000).
  6. Gillespie, D. The chemical Langevin and Fokker-Planck equations for the reversible isomerization reaction. Journal of Physical Chemistry 106 (2002), 5063–5071.
  7. Horn, F. On a connexion between stability and graphs in chemical kinetics. Proceedings of the Royal Society of London 334 (1973), 299–330.
  8. Kamimura, K., Hoshino, H., and Shishikui, Y. Constant delay queuing for jitter-sensitive IPTV distribution on home network. IEEE Global Telecommunications Conference (2008).
  9. Laidler, K. Chemical Kinetics. McGraw-Hill, 1950.
  10. McQuarrie, D. Stochastic approach to chemical kinetics. Journal of Applied Probability 4 (1967), 413–478.
  11. Meyer, T., and Tschudin, C. Flow management in packet networks through interacting queues and law-of-mass-action scheduling. Technical report, University of Basel.
  12. Pocher, H. L., Leung, V., and Gilles, D. An application- and management-based approach to ATM scheduling. Telecommunication Systems 12 (1999), 103–122.
  13. Tschudin, C. Fraglets – a metabolistic execution model for communication protocols. Proceedings of the 2nd Annual Symposium on Autonomous Intelligent Networks and Systems (2003).


Exploiting Chemistry for Better Packet Flow Management 2: Formal Model


This post is the second in a series breaking down a technical report by Meyer and Tschudin [11], which modifies the formal notion of an artificial chemistry and creates an artificial packet chemistry with the goal of designing better flow management by exploiting the natural behavior of chemical reactions.
 
Note: for those more interested in the application and implementation review and discussion, this section can be skipped. 
 

Formal Model of Artificial Packet Chemistry

Artificial Chemistry

The notion of a formal artificial chemistry has been around for some time. There is an excellent paper by Dittrich, Ziegler, and Banzhaf that gives a survey of the work in this area[1]. Put simply, an artificial chemistry is a tuple (\mathcal{S},\mathcal{R},\mathcal{A}) where \mathcal{S} = \{s_{1},s_{2},\ldots s_{n}\} is a set of all valid molecules, \mathcal{R} is a set of rules r that describe interactions between molecules, and \mathcal{A} is the reactor algorithm that determines how the set of rules in \mathcal{R} is applied to a collection of molecules termed \mathcal{P}. \mathcal{P} may be a reaction vessel, reactor, or “soup” (as Dittrich et al call it). It’s also notable that \mathcal{P} cannot be identical to \mathcal{S}.
 
To expand a bit more, we’ll note that the rules r \in \mathcal{R} all take the form
 
s_{1} + s_{2} + \ldots s_{n}\longrightarrow \tilde{s}_{1}+\tilde{s}_{2}+\ldots \tilde{s}_{m}
 
where all s, \tilde{s} \in \mathcal{S}. These rules are fairly abstract, and don’t explicitly seem to describe just a reactant to product type reaction. These can be collisions or other types of interactions. The set \mathcal{S} of valid molecules presumably can be partitioned into disjoint subsets of different species of molecule as well, though the representation in [1] is more general. 
 
Regarding the reactor algorithm \mathcal{A}, Dittrich et al [1]  give several different descriptions/approaches by which it can be defined, depending on whether each molecule is treated explicitly, or all molecules of a type are represented by a single number (frequency or concentration):
 
  1. Stochastic Molecular Collisions.  Every single molecule is worked with, where a sample of molecules from the reaction vessel \mathcal{P} is drawn and the algorithm checks to see if a particular rule r \in \mathcal{R} applies. 
  2. Differential Rate Equations: This approach seeks to describe the dynamics of a chemical system using concentrations of molecular species. The rules under this algorithm take a species approach:  r: a_{1}s_{1} + a_{2}s_{2} + \ldots a_{N}s_{N} \longrightarrow b_{1}s_{1} + b_{2}s_{2} + \ldots + b_{N}s_{N}
    Here, the s_{i}‘s are species, not individual molecules. The coefficients are stoichiometric factors of the reaction. They are simply indicator functions to denote whether species s_{i} is a reactant or product. That is a_{i} = 1 if and only if s_{i} is a reactant in the rule r, and b_{i} = 1 if and only if s_{i} is a product in the rule r. It is this form of \mathcal{A} that Meyer and Tschudin [11] utilize in their packet chemistry. 
     
    The change of overall concentration (concentration denoted c_{s_{i}}) is given by a system of differential equations
    \frac{\text{d}c_{s_{i}}}{\text{d}t} = (b_{i}-a_{i})\prod_{j=1}^{N}c_{s_{j}}^{a_{j}}, \quad i=1,\ldots,N
    according to the Law of Mass Action discussed earlier. There may be multiple rules/reactions r \in \mathcal{R} that affect the concentration of species s_{i}, so 
    \frac{\text{d}c_{s_{i}}}{\text{d}t} = \sum_{r\in \mathcal{R}}\left[(b_{i}^{r}-a_{i}^{r})\prod_{j=1}^{N}c_{s_{j}}^{a_{j}^{r}}\right], \quad i=1,\ldots,N
  3. Others: There are other options, such as metadynamics (where the number of species and thus differential equations may change over time), mixed approaches, or symbolic analysis of the differential equations. As this article would be far too cumbersome to discuss these, they are omitted, but may be found in [1].
 
 
 
 
According to Dittrich et al [1], the reactor algorithm \mathcal{A} depends on the representation of the elements of s_{i} and thus the population. Meyer and Tschudin [11] utilize the second approach, though they do not explicitly state this. 
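To make approach 2 concrete, here is a small sketch that assembles the mass-action right-hand side \frac{\text{d}c_{s_i}}{\text{d}t} from the stoichiometric coefficients; the two-species rule set is a made-up placeholder, and an explicit rate constant k_{r} is included, as in the Law of Mass Action:

```python
# Sketch: building the mass-action ODE system dc/dt from stoichiometry.
# Species and rules below are illustrative placeholders.
species = ["X1", "X2"]
# each rule: (k_r, reactant coefficients a, product coefficients b)
rules = [
    (1000.0, {"X1": 1}, {"X2": 1}),   # X1 -> X2
    (500.0,  {"X2": 1}, {"X1": 1}),   # X2 -> X1
]

def dcdt(c):
    """Sum over rules of (b_i - a_i) * k_r * prod_j c_j^(a_j)."""
    out = {s: 0.0 for s in species}
    for k_r, a, b in rules:
        nu = k_r
        for s, exp in a.items():
            nu *= c[s] ** exp          # Law of Mass Action rate
        for s in species:
            out[s] += (b.get(s, 0) - a.get(s, 0)) * nu
    return out

print(dcdt({"X1": 2.0, "X2": 3.0}))    # {'X1': -500.0, 'X2': 500.0}
```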
 

Artificial Packet Chemistry

 
Meyer and Tschudin adapt the artificial chemistry in the previous section to suit the queuing networks in a given computer network. They add an element \mathcal{G} to the artificial chemistry tuple to get an artificial packet chemistry PC = (\mathcal{G},\mathcal{S},\mathcal{R},\mathcal{A}), where \mathcal{G} is the digraph that gives the topology of the computer network. \mathcal{G} consists of a set of nodes V_{\mathcal{G}}, which represent the chemical reaction vessels (these were the \mathcal{P} in the previous section), and a set of directed arcs E_{\mathcal{G}}, which represent network links connecting adjacent nodes.
 
Here, since a molecular species is the analogue of a queue, \mathcal{S} = \cup_{i \in V}\{S_{i}\}. Here, \{S_{i}\} is a set of all queue instances in a particular node i. At this point, some discussion on clarity is warranted. It is possible to have more than one queueing instance (more than one molecular species) inside each reaction vessel (here the nodes of the network). I don’t think this is meant to be a disjoint union, since a reaction species can show up in more than one reaction vessel, so there may be repeats of certain species in this representation of \mathcal{S} when written this way. Perhaps it’s just a nitpick, but it’s worth mentioning.  
 
\mathcal{R} = \cup_{i \in V}\{R_{i}\} gives all the flow relations among the queues. Here the rules r take form 2 of \mathcal{A}:
 
r \in R_{i}: \sum_{s \in S_{i}}a_{s,r}s \longrightarrow \sum_{s \in S_{i} \cup \{S_{j}\} : j \in N_{i}}b_{s,r}s
 
The reaction rules basically describe what’s going on in a particular reaction vessel. We can send packets to neighboring nodes/vessels (N_{i} is the notation for the neighborhood of node i, or the set of adjacent nodes), or we can keep packets in the same node after the reaction is done. The reactions that send packets to neighboring nodes are transmissions. 
 
The mean reaction rate \nu_{r} of each reaction is given by the Law of Mass Action as applied to forward reactions:
 
\nu_{r} = k_{r}\prod_{s\in S}c_{s}^{a_{s,r}}
 
just as described in the previous section. 
 
 
 

Figure 3 from Meyer and Tschudin [11] gives an explicit example to help solidify these abstract ideas. The network consists of 4 nodes, so V = \{n_{1}, n_{2}, n_{3}, n_{4}\}. Each node has a bidirectional link with its neighbors, so E = \{n_{1}n_{2}, n_{2}n_{1}, n_{2}n_{3}, n_{3}n_{2}, n_{2}n_{4}, n_{4}n_{2}, n_{3}n_{4}, n_{4}n_{3}\}. In this case, we only have one species of molecule (one queue) per node, so \mathcal{S} = \{X_{1}, X_{2}, X_{3}, X_{4}\}. The set of reactions is simply a first-order reaction per arc: \mathcal{R} = \{r_{a,b}: X_{a} \to X_{b}: ab \in E\}

From a review standpoint, I would have liked to see a less trivial example, such as one with multiple queues in a node, and rules that may keep packets in a node instead of just transmitting. These types of scenarios would be interesting to model this way, and demonstrate better the power of this approach.

Continuation

The next post in the series will discuss the mathematical analyses of the artificial packet chemistry described here. 

 

References

  1. Dittrich, P., Ziegler, J., and Banzhaf, W. Artificial chemistries – a review. Artificial Life 7 (2001), 225–275.
  2. Feinberg, M. Complex balancing in general kinetic systems. Archive for Rational Mechanics and Analysis 49 (1972).
  3. Gadgil, C., Lee, C., and Othmer, H. A stochastic analysis of first-order reaction networks. Bulletin of Mathematical Biology 67 (2005), 901–946.
  4. Gibson, M., and Bruck, J. Efficient stochastic simulation of chemical systems with many species and many channels. Journal of Physical Chemistry 104 (2000), 1876–1889.
  5. Gillespie, D. The chemical Langevin equation. Journal of Chemical Physics 113 (2000).
  6. Gillespie, D. The chemical Langevin and Fokker-Planck equations for the reversible isomerization reaction. Journal of Physical Chemistry 106 (2002), 5063–5071.
  7. Horn, F. On a connexion between stability and graphs in chemical kinetics. Proceedings of the Royal Society of London 334 (1973), 299–330.
  8. Kamimura, K., Hoshino, H., and Shishikui, Y. Constant delay queuing for jitter-sensitive IPTV distribution on home network. IEEE Global Telecommunications Conference (2008).
  9. Laidler, K. Chemical Kinetics. McGraw-Hill, 1950.
  10. McQuarrie, D. Stochastic approach to chemical kinetics. Journal of Applied Probability 4 (1967), 413–478.
  11. Meyer, T., and Tschudin, C. Flow management in packet networks through interacting queues and law-of-mass-action scheduling. Technical report, University of Basel.
  12. Pocher, H. L., Leung, V., and Gilles, D. An application- and management-based approach to ATM scheduling. Telecommunication Systems 12 (1999), 103–122.
  13. Tschudin, C. Fraglets – a metabolistic execution model for communication protocols. Proceedings of the 2nd Annual Symposium on Autonomous Intelligent Networks and Systems (2003).


 
Using Boolean Algebra to Find all Maximal Independent Sets in a Graph


Graph theory may be one of the most widely applicable topics I’ve seen in mathematics. It’s used in chemistry, coding theory, operations research, electrical and network engineering, and so many other places. The subject is generally credited as having begun with the famous Seven Bridges of Königsberg problem posed by Leonhard Euler in 1736. Frank Harary should also be credited with his massive work in bringing applications of graph theory to the sciences and engineering, with his famous textbook written in 1969.

My own research forced me to stumble into this area once my research partner, Jason Hathcock, suggested we explore the idea of viewing dependency relations in the sequences of variables we were studying as digraphs. Since then, I’ve been buried in graph theory texts, finding a wealth of fascinating topics to explore.

This article’s particular interest is finding all maximal independent sets in a graph using Boolean algebra.

What’s a maximal independent set?

Firstly, what’s an independent set? 


Definition (Independent Set): A set of vertices of a graph is independent if no two vertices in the set are adjacent. 



If we take a look at the digraph above (from our paper on vertical dependence), and look at the underlying graph, then \{1,6,11\} forms an independent set, as an example. There are lots more, and of varying sizes. Of particular interest here are maximal independent sets.


Definition (Maximal Independent Set): an independent set to which no other vertex in the graph can be added while retaining the independence property.


An example from the graph above is \{2,3,4,5,13\}. If we added any other vertex to that set, it would be adjacent to some vertex already in there. 

A few notes:

(1) There are many maximal independent sets in a graph, and they may not all have the same cardinality. 

(2) Maximal and maximum are not the same thing. An independent set may be a maximal independent set without being the largest independent set in the graph. The largest cardinality among all the maximal independent sets is called the independence number of the graph and is denoted \beta(G).

Why do we care about maximal independent sets?

Of the many applications that arise, one in particular is in coding theory. We want to find the largest error correcting codes we can, particularly in internet transmissions that can lose packets. A paper discussing this can be found here. (Paywall warning). We’ve discussed some basics of coding theory on this site as well. Finding error correcting codes with desirable properties is equivalent to solving the problem of finding maximal independent sets. The purpose of this article isn’t to discuss the applications here, but I’ve learned long ago that no one will keep reading unless I mention at least one application. 

Finding a maximal independent set

[Figure: dependency graph for a monotonic example.]

Finding a maximal independent set is relatively simple. Start with any vertex v \in V(G). Add another vertex u that is not adjacent to v. Continue adding vertices that are not adjacent to any already in the set. For a finite graph, this process will terminate, and the result will be a maximal independent set.

Will it be one of largest cardinality? Not necessarily. 

For example, using one more of our dependency graphs generated by \alpha(n) = \sqrt{n}, we can take the order to be 24 as shown, and build a maximal independent set starting with vertex 3. Note that none of vertices 9-15 or 1 can be in the set, since they’re all adjacent to vertex 3. Vertex 2 is not adjacent to vertex 3, so we add it into our set: V = \{2,3\}. Now, the next vertex we add can’t be adjacent to either 2 or 3, so that rules out 1, 9-15, and 4-8. Grab vertex 16. Now V = \{2,3,16\}. Notice that none of the remaining vertices are adjacent to any of the previous vertices. Continuing this process, we’ll get that V = \{2,3,16,17,18,19,20,21,22,23,24\}. Notice that if we add any other vertices to this set, they’ll be adjacent to something already in it. 
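This greedy procedure is easy to code; here is a minimal sketch (the toy graph and its adjacency sets are placeholders, not the dependency graph from the figure):

```python
# Greedy construction of one maximal independent set: start from a seed
# vertex and keep adding non-adjacent vertices until no more fit.
def maximal_independent_set(adj, seed):
    chosen = {seed}
    for v in adj:                      # scan the remaining vertices in order
        if v not in chosen and all(v not in adj[u] for u in chosen):
            chosen.add(v)              # v is non-adjacent to everything chosen
    return chosen

# toy graph: the path 1 - 2 - 3 - 4, as adjacency sets
adj = {1: {2}, 2: {1, 3}, 3: {2, 4}, 4: {3}}
print(maximal_independent_set(adj, 3))   # -> {1, 3}
```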

Finding all Maximal Independent Sets

We’re rarely interested in just finding one maximal independent set. We’d prefer to find them all, and doing it by inspection is not very palatable. The heart of the article is an admittedly not optimal but still interesting way to find all maximal independent sets for reasonably small graphs. 

[Figure: the 6-node example graph with vertices A through F. Image credit: https://www.geeksforgeeks.org/mathematics-graph-theory-basics/]

We’ll illustrate the method on the 6-node graph above. 

Getting started

First, we’ll assign a Boolean variable to each vertex according to its inclusion in a maximal independent set. For example A = 1 implies A is in the maximal independent set. Recall from Boolean algebra that 

x+y = \left\{\begin{array}{lr}1, & x = 1 \text{ or } y = 1 \text{ or } (x=y=1)\\0,&x=0 \text{ and } y=0\end{array}\right.

Remark: x+y is just another way of writing a union. This isn’t addition mod 2 here. 

xy=\left\{\begin{array}{lr}1, & x = 1 =y\\0,&\text{ otherwise}\end{array}\right.

What we’ve done here is set up inclusion into our maximal independent sets in a Boolean fashion. So x+y = 1 corresponds to the inclusion of either vertex x OR vertex y OR both vertices x and y. Similarly, xy = 1 corresponds to the inclusion of both vertices x and y.

Now, we can express an edge of a graph as a Boolean product xy, where x and y are the vertices at either end of the edge. 

Finally, set up the sum of all edges and call it \phi:

\phi = \sum\limits_{(x,y) \in E(G)} xy

For our graph above,

\phi = AB + AD + AE + BC + CE + CF + DE + EF

Why did we do this?

For a vertex to be in an independent set, it can’t be adjacent to any other vertices in the set. Put another way, for each edge, we can only have at most one of the vertices that make it up. If we include A in the independent set V, then B cannot be in there. 

Returning to our \phi, note that its value under Boolean algebra can only be 0 or 1. If \phi = 1, then at least one edge has both of its vertices “on”. This means only combinations of A, B, C, D, E, F that yield \phi = 0 will give us a maximal independent set.

Solving the problem

Our goal now is to find all combinations of our Boolean vertex variables that yield \phi = 0. As it turns out, solving this directly is pretty annoying. If we want \phi = 0, that’s logically equivalent to seeking \phi^{c} = 1, where \phi^{c} is the Boolean complement (or negation) of \phi.

Recall from Boolean algebra the following:

\begin{aligned}(xy)^{c}&=x^{c}+y^{c}\\(x+y)^{c} &= x^{c}y^{c}\end{aligned}

So, if we take \phi^{c} for our graph above, 

\begin{aligned}\phi^{c}&=(A^{c}+B^{c})(A^{c}+D^{c})(A^{c}+E^{c})(B^{c}+C^{c})(C^{c}+E^{c})\\&\quad(C^{c}+F^{c})(D^{c}+E^{c})(E^{c}+F^{c})\end{aligned}

What does the negation here actually mean? By taking the complement, instead of finding vertices to include, now we’re finding vertices to exclude. When we multiply this expression out, we’ll get a sum of terms, where each term is a product of complements of our original Boolean variables. To get \phi^{c} = 1, all we need is one of those terms to be 1. To get a term to be 1, all members of the product must themselves be 1, meaning each term gives us a set of variables to exclude. Excluding these variables gives us one maximal independent set for each term, so this gives us all the maximal independent sets.

The nice thing about dealing with Boolean arithmetic is that we can program a computer to do this for us. Any time we can invoke a relationship with Boolean algebra, we can enlist a friendly helpful computer. 
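As a sanity check on the algebra, here is a brute-force sketch that enumerates all maximal independent sets of the example graph directly; it verifies the five sets derived below rather than performing the symbolic expansion itself:

```python
from itertools import combinations

# Brute-force all maximal independent sets of the 6-vertex example:
# keep the independent sets with no independent proper superset.
edges = [("A","B"), ("A","D"), ("A","E"), ("B","C"),
         ("C","E"), ("C","F"), ("D","E"), ("E","F")]
vertices = sorted({v for e in edges for v in e})

def independent(s):
    return not any(x in s and y in s for x, y in edges)

indep = [frozenset(c) for r in range(1, len(vertices) + 1)
         for c in combinations(vertices, r) if independent(c)]
maximal = [set(s) for s in indep if not any(s < t for t in indep)]
print(maximal)   # {B,D,F}, {C,D}, {B,E}, {A,F}, {A,C}
```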

Finishing the example

We’ll do it by hand here, because I’m old-school like that. For larger graphs, obviously one would want to enlist some computational help, or just be very patient. We’ll recall a few other rules of Boolean algebra before we finish:

\begin{aligned}xx &=x\\x+x &=x\\x +xy&=x\end{aligned}

After an insane amount of tedious Boolean algebra,

\phi^{c} = A^{c}C^{c}E^{c}+A^{c}B^{c}E^{c}F^{c}+A^{c}C^{c}D^{c}F^{c}+B^{c}C^{c}D^{c}E^{c}+B^{c}D^{c}E^{c}F^{c}

Recall that each term now tells us which set of vertices to exclude from a maximal independent set. We negated the question logically. That means we have 5 maximal independent sets:

\{B,D,F\}, \{C,D\}, \{B,E\}, \{A,F\}, \{A,C\}

We can actually say what the independence number is as well, since we just have to find the maximum cardinality among the sets listed. For this graph, \beta(G) = 3.

Conclusion

I happened to find this interesting, and ended up obsessed with it for a day, much to the chagrin of my daily planner, which expected me to be working on my research monograph. I tried several different ways of solving this beyond the one given: I tried using the direct equation \phi, and I tried using regular arithmetic on just \{0,1\}, setting up a sort of structure function similar to the reliability block diagrams detailed here.

I always hesitate to blame the method rather than my own arithmetic errors, but I didn’t have much luck with the structure-function approach, though I may try again to see whether it’s an equivalent method. I believe it should be.

Looking at \phi^{c} makes more sense after playing with this problem for some hours. The sum/union is quite nice, because it neatly separates out the various sets to exclude. It’s a better exploitation of Boolean algebra than trying to work with \phi directly while aiming for a sum of 0. I still think it should be possible to work with \phi directly, even if it’s not advisable. If I decide to torture myself with it further, and end up with something to write about, perhaps I’ll append it here.

I always end up ending my articles with some takeaway. I don’t have much of one here, except that this was a curiosity worth sharing. Perhaps a decent takeaway is to reveal a bit of the tedium and dead ends mathematicians can run into when exploring something. That’s just part of research and understanding. It’s entirely possible to spend hours, days, or weeks on something, only to conclude that the original method you saw is definitely superior to the one you were trying to develop.

Simulating Soundscapes Using Convolutions

Simulating Soundscapes Using Convolutions

One of the most powerful areas of electrical engineering that flourished in the 20th century is the field of signal processing. The field is broad and rich in some beautiful mathematics, but by way of introduction, here we’ll take a look at some basic properties of signals and how we can use these properties to find a nice compact representation of operations on them. As a motivating application, we’ll use what we study today to apply certain effects to audio signals. In particular, we’ll take a piece of audio, and be able to make it sound like it’s being played in a cathedral, or in a parking garage, or even through a metal spring.

First things first: what is a signal? For this discussion we’ll limit ourselves to looking at the space \ell = \{x :\mathbb{Z} \rightarrow \mathbb{R}\} – the set of functions which take an integer and return a real number. Another way to think of a signal, then, is as an infinite sequence of real numbers. We’re limiting ourselves to functions whose domain is discrete (the integers), rather than continuous (the real numbers), since in many applications we’re looking at signals that represent some measurement taken at a bunch of different times. It’s worth noting that any signal defined on a countable domain \{..., t_{n-1}, t_n, t_{n+1},...\} can be converted to one defined on the integers via an isomorphism. We place one further restriction on the signals, in order to make certain operations possible. We restrict the space to so-called finite-energy signals:

\ell_2 = \left\{x \in \ell : \sum_{n = -\infty}^{\infty} |x(n)|^2 < \infty\right\}.

This restriction makes it much easier to study these functions and prove things about them, while still leaving us plenty of useful signals to work with, without messy infinities getting in the way. In practice, when dealing with audio we usually have a signal with finite length and range, so this finite-energy property holds trivially.

Studying signals is only useful if we can also define operations on them. We’ll study the interaction of signals with systems, which take one signal and transform it into another – essentially, a function operating on signals. Here, we’ll say that a system H : \ell_2 \rightarrow \ell_2 takes an input signal x(n) and produces the output signal H\{x(n)\} = y(n).

Linearity and Time Invariance

There are certain properties that are useful for systems to have. The first is linearity. A system H is considered linear if for every pair of inputs x_1, x_2 \in \ell_2, and for any scalar values \alpha, \beta \in \mathbb{R}, we have

H\{\alpha x_1 + \beta x_2\} = \alpha H\{x_1\} + \beta H\{x_2\}

This is very useful, because it allows us to break down a signal into simpler parts, study the response of the system to each of those parts, and understand the response to the more complex original signal.

The next property we’re going to impose on our systems is time-invariance:

\forall s \in \mathbb{Z}, H\{x(n)\} = y(n) \Rightarrow H\{x(n-s)\} = y(n-s)

This means that shifting the input by s corresponds to a similar shift in the output. In our example of playing music in a cathedral, we expect our system to be time-invariant, since it shouldn’t matter whether we play our music at noon or at midnight, we’d expect it to sound the same. However, if we were playing in a building that, for example, lowered a bunch of sound-dampening curtains at 8pm every night, then the system would no longer be time-invariant.

So what are some more concrete examples of systems that are linear and time-invariant?
Let’s consider an audio effect which imitates an echo – it outputs the original signal, plus a quieter, delayed version of that signal. We might express such a system as

H_{\Delta, k}\{x(n)\} = x(n) + kx(n-\Delta)

where \Delta \in \mathbb{Z} is the time delay of the echo (in number of samples), and k \in \mathbb{R} is the relative volume of the echoed signal. We can see that this system is time-invariant, because the time variable never appears outside of the input. If we replaced k by a function k(n) = \sin(n), for example, we would lose this time-invariance. Additionally, the system is plainly linear:

\begin{aligned}H_{\Delta, k}\{\alpha x_1(n) + \beta x_2(n)\} &= \alpha x_1(n) + \beta x_2(n) + \alpha k x_1(n-\Delta) + \beta k x_2(n-\Delta) \\&= \alpha H_{\Delta, k}\{x_1(n)\} + \beta H_{\Delta, k}\{x_2(n)\}\end{aligned}

A common non-linearity in audio processing is called clipping — we limit the output to be between -1 and 1: H\{x(n)\} = \max(\min(x(n), 1), -1). This is clearly non-linear since doubling the input will not generally double the output.
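To make the two examples concrete, here is a minimal Python sketch (the function names are mine; 1-D numpy arrays stand in for finite-support signals) of the echo system and the clipping non-linearity:

```python
import numpy as np

def echo(x, delay, k):
    """H_{Delta,k}: y(n) = x(n) + k * x(n - delay).
    The output is `delay` samples longer than x, to hold the echo's tail."""
    y = np.concatenate([x, np.zeros(delay)])
    y[delay:] += k * x
    return y

def clip(x):
    """Non-linear clipping: limit every sample to [-1, 1]."""
    return np.clip(x, -1.0, 1.0)
```

Doubling the input doubles the output of echo, but not of clip once samples exceed the limits – exactly the linearity distinction drawn above.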

The Kronecker delta signal

There is a very useful signal that I would be remiss not to mention here: the Kronecker delta signal. We define this signal as

\delta(n) = \begin{cases} 1 & n = 0 \\ 0 & n \neq 0 \end{cases}

The delta defines an impulse, and we can use it to come up with a nice compact description of linear, time-invariant systems. One property of the delta is that it can be used to “extract” a single element from another signal, by multiplying:

\forall s \in \mathbb{Z}, \delta(n-s)x(s) = \begin{cases} x(n) & n=s \\ 0 & n \neq s\end{cases}

Similarly, we can then write any signal as an infinite sum of these multiplications:

x(n) = \sum_{s=-\infty}^{\infty} \delta(n-s)x(s) = \sum_{s=-\infty}^{\infty}\delta(s)x(n-s)
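This identity is easy to check numerically on a finite-support signal; here’s a quick sketch (the sample values are arbitrary):

```python
import numpy as np

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])    # a finite-support signal
delta = lambda n: 1.0 if n == 0 else 0.0    # Kronecker delta

# x(n) = sum over s of delta(n - s) * x(s), restricted to the support
recon = [sum(delta(n - s) * x[s] for s in range(len(x)))
         for n in range(len(x))]
assert np.allclose(recon, x)
```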

Why would we want to do this? Let H be a linear, time-invariant system, and let h(n) = H\{\delta(n)\}, the response of the system to the delta signal. Then we have

\begin{aligned} H\{x(n)\} &= H\left\{\sum_{s=-\infty}^{\infty} \delta(n-s)x(s)\right\}\\&=\sum_{s=-\infty}^{\infty}H\{\delta(n-s)\}x(s) \text{ by linearity}\\&=\sum_{s=-\infty}^{\infty}h(n-s)x(s) \text{ by time-invariance}\end{aligned}

We can write any linear, time-invariant system in this form. We call the function h the impulse response of the system, and it fully describes the system’s behaviour. This operation, where we sum up products of shifted signals, is called a convolution, and it appears in many different fields of math.
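For finite-length signals the doubly infinite sum collapses to a finite one, and numpy computes it directly. Here’s a sketch (continuing the hypothetical echo example from above) confirming that convolving with a system’s impulse response reproduces the system:

```python
import numpy as np

def apply_system(x, h):
    """y(n) = sum_s h(n - s) x(s): convolve the input with the
    impulse response h of a linear, time-invariant system."""
    return np.convolve(x, h)

# Impulse response of the echo H_{Delta,k} with Delta = 3, k = 0.5:
# h(n) = delta(n) + 0.5 * delta(n - 3)
h = np.array([1.0, 0.0, 0.0, 0.5])

x = np.array([3.0, 1.0, 4.0, 1.0, 5.0])
y = apply_system(x, h)

# Same result as applying the echo directly:
expected = np.concatenate([x, np.zeros(3)])
expected[3:] += 0.5 * x
assert np.allclose(y, expected)
```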

Firing a Gun in The Math Citadel

The power of this representation of a system is that if we want to understand how it will act on any arbitrary signal, it is sufficient to understand how it responds to an impulse. To demonstrate this, we’ll look at the example of how audio is affected by the environment it is played in. Say we were a sound engineer, and we wanted to get an instrument to sound like it was being played in a big, echoing cathedral. We could try to find such a place and actually record the instrument, but that could be expensive, requiring a lot of setup and time. Instead, if we can record the impulse response of that space, we can apply the convolution to a recording we did back in a studio. How do we capture an impulse response? We just need a loud, very short audio source – firing a pistol or popping a balloon are common. To demonstrate, here are some example impulse responses, taken from OpenAirLib, and their effects on different audio signals.
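In code, the whole effect is a few lines. Here’s a sketch using SciPy (the file names are hypothetical; any mono WAV recording and mono impulse response will do):

```python
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

# Hypothetical file names; the impulse response could be, say,
# a recorded balloon pop in the target space.
rate_x, x = wavfile.read("guitar.wav")
rate_h, h = wavfile.read("stairwell_ir.wav")
assert rate_x == rate_h  # the two recordings must share a sample rate

x = x.astype(np.float64)
h = h.astype(np.float64)

y = fftconvolve(x, h)        # the convolution derived above
y /= np.max(np.abs(y))       # normalize to avoid clipping

# Reversing the impulse response (h[::-1]) gives the synthetic
# "pre-echo" effect described below.
wavfile.write("guitar_processed.wav", rate_x, (y * 32767).astype(np.int16))
```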

First, here is the unprocessed input signal – a short piece of jazz guitar:

Here is the same clip, as if it were played in a stairwell at the University of York. First, the impulse response, then the processed audio.

That sounds a little different, but we can try a more extreme example: the gothic cathedral of York Minster. Again, here is the impulse response, followed by the processed signal.

In this case, we have a much more extreme reverberation effect, and we get the sound of a guitar in a large, ringing room. For our last example, we’ll note that impulse responses don’t have to be natural recordings – they can be entirely synthetic. Here, I’ve simply reversed the first impulse response from the stairwell, creating a pre-echo effect that doesn’t occur naturally.

This is just one of the most basic examples of what can be done with signal processing, but I think it’s a particularly good one – by defining some reasonable properties for signals and systems, we’re able to derive a nice compact representation that also makes a practical application very simple.

