# Extensions of the Single Server Efficiency Model

##### For the full paper, which includes all proofs, click here.

## Abstract

*Editor’s note: this paper comprises the third chapter of the PhD dissertation by Rachel Traylor. Visit here and here to see chapters one and two, respectively.* Herein we further generalize the single server model presented in [3]. In particular, we consider a multichannel server under the cases of both singular and clustered tasks. In the instance of singular tasks, we present a load balancing allocation scheme and obtain a stochastic breakdown rate process, as well as derive the conditional survival function as a result. The model of a multichannel server taking in clustered tasks gives rise to two possibilities, namely, independent and correlated channels. We derive survival functions for both of these scenarios.

## Load Balancing Allocation for a Multichannel Server

### Model Description

Previously, we assumed that a web server functions as a single queue that attempts to process jobs as soon as they arrive. These jobs each brought a constant stress \eta to the server, and the system stress decreased by \eta upon the completion of each job.

Now, suppose we have a server partitioned into K channels. Denote each channel as Q_k, k = 1,\ldots,K. Jobs arrive via a nonhomogeneous Poisson process with rate \lambda(t). Upon arrival, each job falls (or is routed) to the channel with the shortest queue length. If multiple channels tie for the shortest queue length, the job enters one of those queues with equal probability.
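This allocation rule can be sketched in a few lines; the function name `route_job` is illustrative only and does not appear in the paper.

```python
import random

def route_job(queue_lengths):
    """Join-the-shortest-queue routing as described in the model:
    return the index k of the channel Q_k a new arrival joins.
    Ties for the shortest queue are broken uniformly at random."""
    shortest = min(queue_lengths)
    candidates = [k for k, q in enumerate(queue_lengths) if q == shortest]
    return random.choice(candidates)
```

For example, with queue lengths (2, 1, 3) the job is routed to channel 2 (index 1); with all queues equal, each channel is chosen with probability 1/K.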

We retain the previous notation for the baseline breakdown rate, or hazard function, denoted r_0(t): the hazard function of an idle system. We also retain the assumption that the arrival times \mathbf{T} are independent, and that the service times \mathfrak{W} are i.i.d. with distribution G_W(w). We assume all channels serve jobs simultaneously; that is, a job may complete in any queue at any time. We do not require load balancing for service, so one queue may empty while others remain backlogged. Each queue retains the FIFO service policy.

Since we have now “balanced,” or distributed, the load of jobs in the server, not all jobs will cause additional stress to the system. Suppose all jobs bring the same constant stress \eta upon arrival. Under load balancing, we will define the additional stress to the system as \eta\max_k|Q_k|. **Figure 1** shows an example server with current stress of 4\eta.
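The stress definition above is a one-line computation; the following minimal sketch (with hypothetical queue lengths) illustrates how a configuration like the one in Figure 1 yields a stress of 4\eta.

```python
def system_stress(queue_lengths, eta):
    """Additional stress to the system under load balancing:
    eta * max_k |Q_k|, where |Q_k| is the length of channel k's queue."""
    return eta * max(queue_lengths)

# Hypothetical configuration: the longest queue holds 4 jobs,
# so the system stress is 4 * eta regardless of the other queues.
stress = system_stress([4, 2, 3], eta=1.0)
```

Note that, unlike the single-queue model, adding a job to any channel other than the longest one leaves the system stress unchanged.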