
10 A Bayesian Approach to Multiple-Target Tracking*

Lawrence D. Stone
Metron Inc.

10.1 Introduction
  Definition of Bayesian Approach • Relationship to Kalman Filtering
10.2 Bayesian Formulation of the Single-Target Tracking Problem
  Bayesian Filtering • Problem Definition • Computing the Posterior • Likelihood Functions
10.3 Multiple-Target Tracking without Contacts or Association (Unified Tracking)
  Multiple-Target Motion Model • Multiple-Target Likelihood Functions • Posterior Distribution • Unified Tracking Recursion
10.4 Multiple-Hypothesis Tracking (MHT)
  Contacts, Scans, and Association Hypotheses • Scan and Data Association Likelihood Functions • General Multiple-Hypothesis Tracking • Independent Multiple-Hypothesis Tracking
10.5 Relationship of Unified Tracking to MHT and Other Tracking Approaches
  General MHT Is a Special Case of Unified Tracking • Relationship of Unified Tracking to Other Multiple-Target Tracking Algorithms • Critique of Unified Tracking
10.6 Likelihood Ratio Detection and Tracking
  Basic Definitions and Relations • Likelihood Ratio Recursion • Log-Likelihood Ratios • Declaring a Target Present • Track-Before-Detect
References

10.1 Introduction

This chapter views the multiple-target tracking problem as a Bayesian inference problem and highlights the benefits of this approach. The goal of this chapter is to provide the reader with some insights and perhaps a new view of multiple-target tracking. It is not designed to provide the reader with a set of algorithms for multiple-target tracking.

*This chapter is based on Bayesian Multiple Target Tracking, by Stone, L. D., Barlow, C. A., and Corwin, T. L., 1999. Artech House, Inc., Norwood, MA. www.artechhouse.com.

©2001 CRC Press LLC

The chapter begins with a Bayesian formulation of the single-target tracking problem and then extends this formulation to multiple targets.
It then discusses some of the interesting consequences of this formulation, including:
• A mathematical formulation of the multiple-target tracking problem with a minimum of complications and formalisms
• The emergence of likelihood functions as a generalization of the notion of contact and as the basic currency for valuing and combining information from disparate sensors
• A general Bayesian formula for calculating association probabilities
• A method, called unified tracking, for performing multiple-target tracking when the notions of contact and association are not meaningful
• A delineation of the relationship between multiple-hypothesis tracking (MHT) and unified tracking
• A Bayesian track-before-detect methodology called likelihood ratio detection and tracking.

10.1.1 Definition of Bayesian Approach

To appreciate the discussion in this chapter, the reader must first understand the concept of Bayesian tracking. For a tracking system to be considered Bayesian, it must have the following characteristics:
• Prior Distribution — There must be a prior distribution on the state of the targets. If the targets are moving, the prior distribution must include a probabilistic description of the motion characteristics of the targets. Usually the prior is given in terms of a stochastic process for the motion of the targets.
• Likelihood Functions — The information in sensor measurements, observations, or contacts must be characterized by likelihood functions.
• Posterior Distribution — The basic output of a Bayesian tracker is a posterior probability distribution on the (joint) state of the target(s). The posterior at time t is computed by combining the motion-updated prior at time t with the likelihood function for the observation(s) received at time t.

These are the basics: prior, likelihood functions, posterior. If these are not present, the tracker is not Bayesian.
The recursions given in this chapter for performing Bayesian tracking are all “recipes” for calculating priors, likelihood functions, and posteriors.

10.1.2 Relationship to Kalman Filtering

Kalman filtering resulted from viewing tracking as a least squares problem and finding a recursive method of solving that problem. One can think of many standard tracking solutions as methods for minimizing mean squared errors. Chapters 1 to 3 of Blackman and Popoli1 give an excellent discussion of tracking from this point of view.

One can also view Kalman filtering as Bayesian tracking. To do this, one starts with a prior that is Gaussian in the appropriate state space with a “very large” covariance matrix. Contacts are measurements that are linear functions of the target state with Gaussian measurement errors. These are interpreted as Gaussian likelihood functions and combined with motion updated priors to produce posterior distributions on target state. Because the priors are Gaussian and the likelihood functions are Gaussian, the posteriors are also Gaussian. When doing the algebra, one finds that the mean and covariance of the posterior Gaussian are identical to the mean and covariance of the least squares solution produced by the Kalman filter. The difference is that from the Bayesian point of view, the mean and covariance matrices represent posterior Gaussian distributions on target state. Plots of the mean and characteristic ellipses are simply shorthand representations of these distributions.

Bayesian tracking is not simply an alternate way of viewing Kalman filtering. Its real value is demonstrated when some of the assumptions required for Kalman filtering are not satisfied. Suppose the prior distribution on target motion is not Gaussian, or the measurements are not linear functions of the target state, or the measurement error is not Gaussian. Suppose that multiple sensors are involved and are quite different.
Perhaps they produce measurements that are not even in the target state space. This can happen if, for example, one of the measurements is the observed signal-to-noise ratio at a sensor. Suppose that one has to deal with measurements that are not even contacts (e.g., measurements that are so weak that they fall below the threshold at which one would call a contact). Tracking problems involving these situations do not fit well into the mean squared error paradigm or the Kalman filter assumptions. One can often stretch the limits of Kalman filtering by using linear approximations to nonlinear measurement relations or by other nonlinear extensions. Often these extensions work very well. However, there does come a point where these extensions fail. That is where Bayesian filtering can be used to tackle these more difficult problems. With the advent of high-powered and inexpensive computers, the numerical hurdles to implementing Bayesian approaches are often easily surmounted. At the very least, knowing how to formulate the solution from the Bayesian point of view will allow one to understand and choose wisely the approximations needed to put the problem into a more tractable form.

10.2 Bayesian Formulation of the Single-Target Tracking Problem

This section presents a Bayesian formulation of single-target tracking and a basic recursion for performing single-target tracking.

10.2.1 Bayesian Filtering

Bayesian filtering is based on the mathematical theory of probabilistic filtering described by Jazwinski.2 Bayesian filtering is the application of Bayesian inference to the problem of tracking a single target. This section considers the situation where the target motion is modeled in continuous time, but the observations are received at discrete, possibly random, times. This is called continuous-discrete filtering by Jazwinski.
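Before formalizing the problem, the claim in Section 10.1.2 that the Kalman update is a special case of Bayesian filtering can be checked numerically. The sketch below uses toy scalar values (a diffuse Gaussian prior and one linear-Gaussian measurement) and confirms that the Bayesian posterior and the Kalman measurement update agree:

```python
import numpy as np

# Scalar illustration: a Gaussian prior N(mu0, P0) on target position,
# combined with a linear-Gaussian measurement y = x + eps, eps ~ N(0, R).
mu0, P0 = 0.0, 100.0   # diffuse prior with a "very large" variance
y, R = 3.0, 4.0        # measurement value and its error variance

# Bayesian route: posterior ∝ prior × likelihood; closed form for Gaussians.
post_var = 1.0 / (1.0 / P0 + 1.0 / R)
post_mean = post_var * (mu0 / P0 + y / R)

# Kalman route: gain K = P0 / (P0 + R), then the usual mean/covariance update.
K = P0 / (P0 + R)
kf_mean = mu0 + K * (y - mu0)
kf_var = (1.0 - K) * P0

assert np.isclose(post_mean, kf_mean) and np.isclose(post_var, kf_var)
print(post_mean, post_var)  # the two routes agree
```

The same algebra goes through in matrix form for vector states, which is the equivalence described above.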
10.2.2 Problem Definition

The single-target tracking problem assumes that there is one target present in the state space; as a result, the problem becomes one of estimating the state of that target.

10.2.2.1 Target State Space

Let S be the state space of the target. Typically, the target state will be a vector of components. Usually some of these components are kinematic and include position, velocity, and possibly acceleration. Note that there may be constraints on the components, such as a maximum speed for the velocity component. There can be additional components that may be related to the identity or other features of the target. For example, if one of the components specifies target type, then that may also specify information such as radiated noise levels at various frequencies and motion characteristics (e.g., maximum speeds).

In order to use the recursion presented in this section, there are additional requirements on the target state space. The state space must be rich enough that (1) the target’s motion is Markovian in the chosen state space and (2) the sensor likelihood functions depend only on the state of the target at the time of the observation. The sensor likelihood functions depend on the characteristics of the sensor, such as its position and measurement error distribution, which are assumed to be known. If they are not known, they need to be determined by experimental or theoretical means.

10.2.2.2 Prior Information

Let X(t) be the (unknown) target state at time t. We start the problem at time 0 and are interested in estimating X(t) for t ≥ 0. The prior information about the target is represented by a stochastic process {X(t); t ≥ 0}. Sample paths of this process correspond to possible target paths through the state space, S. The state space S has a measure associated with it. If S is discrete, this measure is a discrete measure. If S is continuous (e.g., if S is equal to the plane), this measure is represented by a density.
The measure on S can be a mixture or product of discrete and continuous measures. Integration with respect to this measure will be indicated by ds. If the measure is discrete, then integration becomes summation.

10.2.2.3 Sensors

There is a set of sensors that report observations at an ordered, discrete sequence of (possibly random) times. These sensors may be of different types and report different information. The set can include radar, sonar, infrared, visual, and other types of sensors. The sensors may report only when they have a contact or on a regular basis. Observations from sensor j take values in the measurement space H_j. Each sensor may have a different measurement space. The probability distribution of each sensor’s response conditioned on the value of the target state s is assumed to be known. This relationship is captured in the likelihood function for that sensor. The relationship between the sensor response and the target state s may be linear or nonlinear, and the probability distribution representing measurement error may be Gaussian or non-Gaussian.

10.2.2.4 Likelihood Functions

Suppose that by time t observations have been obtained at the set of times 0 ≤ t_1 ≤ … ≤ t_K ≤ t. To allow for the possibility that more than one sensor observation may be received at a given time, let Y_k be the set of sensor observations received at time t_k. Let y_k denote a value of the random variable Y_k. Assume that the likelihood function can be computed as

    L_k(y_k | s) = Pr{Y_k = y_k | X(t_k) = s}  for s ∈ S                      (10.1)

The computation in Equation 10.1 can account for correlation among sensor responses. If the distribution of the set of sensor observations at time t_k is independent given target state, then L_k(y_k | s) is computed by taking the product of the probability (density) functions for each observation.
If they are correlated, then one must use the joint density function for the observations conditioned on target state to compute L_k(y_k | s).

Let Y(t) = (Y_1, Y_2, …, Y_K) and y = (y_1, …, y_K). Define L(y | s_1, …, s_K) = Pr{Y(t) = y | X(t_1) = s_1, …, X(t_K) = s_K}. Assume

    Pr{Y(t) = y | X(u) = s(u), 0 ≤ u ≤ t} = L(y | s(t_1), …, s(t_K))          (10.2)

Equation 10.2 means that the likelihood of the data Y(t) received through time t depends only on the target states at the times {t_1, …, t_K} and not on the whole target path.

10.2.2.5 Posterior

Define q(s_1, …, s_K) = Pr{X(t_1) = s_1, …, X(t_K) = s_K} to be the prior probability (density) that the process {X(t); t ≥ 0} passes through the states s_1, …, s_K at times t_1, …, t_K. Let p(t_K, s_K) = Pr{X(t_K) = s_K | Y(t_K) = y}. Note that the dependence of p on y has been suppressed. The function p(t_K, ·) is the posterior distribution on X(t_K) given Y(t_K) = y. In mathematical terms, the problem is to compute this posterior distribution. Recall that from the point of view of Bayesian inference, the posterior distribution on target state represents our knowledge of the target state. All estimates of target state derive from this posterior.

10.2.3 Computing the Posterior

Compute the posterior by the use of Bayes’ theorem as follows:

    p(t_K, s_K) = Pr{Y(t_K) = y and X(t_K) = s_K} / Pr{Y(t_K) = y}

                = [∫ L(y | s_1, …, s_K) q(s_1, s_2, …, s_K) ds_1 ds_2 … ds_{K−1}] /
                  [∫ L(y | s_1, …, s_K) q(s_1, s_2, …, s_K) ds_1 ds_2 … ds_K]                      (10.3)

Computing p(t_K, s_K) can be quite difficult. The method of computation depends upon the functional forms of q and L. The two most common ways are batch computation and a recursive method.

10.2.3.1 Recursive Method

Two additional assumptions about q and L permit recursive computation of p(t_K, s_K). First, the stochastic process {X(t); t ≥ 0} must be Markovian on the state space S.
Second, for i ≠ j, the distribution of Y(t_i) must be independent of Y(t_j) given (X(t_1) = s_1, …, X(t_K) = s_K), so that

    L(y | s_1, …, s_K) = ∏_{k=1}^{K} L_k(y_k | s_k)                           (10.4)

The assumption in Equation 10.4 means that the sensor responses (or observations) at time t_k depend only on the target state at the time t_k. This is not automatically true. For example, if the target state space is position only and the observation is a velocity measurement, this observation will depend on the target state over some time interval near t_k. The remedy in this case is to add velocity to the target state space. There are other observations, such as failure of a sonar sensor to detect an underwater target over a period of time, for which the remedy is not so easy or obvious. This observation may depend on the whole past history of target positions and, perhaps, velocities.

Define the transition function q_k(s_k | s_{k−1}) = Pr{X(t_k) = s_k | X(t_{k−1}) = s_{k−1}} for k ≥ 1, and let q_0 be the probability (density) function for X(0). By the Markov assumption

    q(s_1, …, s_K) = ∫_S ∏_{k=1}^{K} q_k(s_k | s_{k−1}) q_0(s_0) ds_0         (10.5)

10.2.3.2 Single-Target Recursion

Applying Equations 10.4 and 10.5 to 10.3 results in the basic recursion for single-target tracking given below.

Basic Recursion for Single-Target Tracking

    Initialize Distribution:  p(t_0, s_0) = q_0(s_0)  for s_0 ∈ S             (10.6)

For k ≥ 1 and s_k ∈ S,

    Perform Motion Update:  p⁻(t_k, s_k) = ∫ q_k(s_k | s_{k−1}) p(t_{k−1}, s_{k−1}) ds_{k−1}       (10.7)

Compute the likelihood function L_k from the observation Y_k = y_k.

    Perform Information Update:  p(t_k, s_k) = (1/C) L_k(y_k | s_k) p⁻(t_k, s_k)                   (10.8)

The motion update in Equation 10.7 accounts for the transition of the target state from time t_{k−1} to t_k. Transitions can represent not only the physical motion of the target, but also changes in other state variables.
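On a discretized state space, the recursion above becomes matrix-vector arithmetic. The sketch below is an illustration of Equations 10.6 through 10.8 with the integrals replaced by sums; the drift motion model and the Gaussian-shaped likelihood are toy assumptions, not the chapter's:

```python
import numpy as np

# Discrete sketch of the basic single-target recursion (Eqs. 10.6-10.8).
n = 50
states = np.arange(n)

# Toy motion model: target drifts +1 cell with prob 0.8, stays with prob 0.2.
# Column s of Q is the distribution of the next state given current state s.
Q = np.zeros((n, n))
for s in range(n):
    Q[min(s + 1, n - 1), s] += 0.8
    Q[s, s] += 0.2

def motion_update(p):
    # Eq. 10.7: p-(t_k, s_k) = sum over s_{k-1} of q_k(s_k | s_{k-1}) p(t_{k-1}, s_{k-1})
    return Q @ p

def information_update(p_minus, likelihood):
    # Eq. 10.8: point-wise multiply by L_k, then normalize by the constant C.
    p = p_minus * likelihood
    return p / p.sum()

# Eq. 10.6: initialize with a uniform prior.
p = np.full(n, 1.0 / n)

# Toy Gaussian-shaped likelihood for a position observation near cell 20.
like = np.exp(-0.5 * ((states - 20.0) / 3.0) ** 2)

p = information_update(motion_update(p), like)
assert np.isclose(p.sum(), 1.0)   # posterior is a probability distribution
print(states[np.argmax(p)])       # MAP estimate, near the observed cell
```

The same two-step loop, repeated over observation times, is the whole recursion; only the transition matrix and the likelihoods change from problem to problem.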
The information update in Equation 10.8 is accomplished by point-wise multiplication of p⁻(t_k, s_k) by the likelihood function L_k(y_k | s_k). Likelihood functions replace and generalize the notion of contacts in this view of tracking as a Bayesian inference process. Likelihood functions can represent sensor information such as detections, no detections, Gaussian contacts, bearing observations, measured signal-to-noise ratios, and observed frequencies of a signal. Likelihood functions can represent and incorporate information in situations where the notion of a contact is not meaningful. Subjective information also can be incorporated by using likelihood functions. Examples of likelihood functions are provided in Section 10.2.4. If there has been no observation at time t_k, then there is no information update, only a motion update.

The above recursion does not require the observations to be linear functions of the target state. It does not require the measurement errors or the probability distributions on target state to be Gaussian. Except in special circumstances, this recursion must be computed numerically. Today’s high-powered scientific workstations can compute and display tracking solutions for complex nonlinear trackers. To do this, discretize the state space and use a Markov chain model for target motion so that Equation 10.7 is computed through the use of discrete transition probabilities. The likelihood functions are also computed on the discrete state space. A numerical implementation of a discrete Bayesian tracker is described in Section 3.3 of Stone et al.3

10.2.4 Likelihood Functions

The use of likelihood functions to represent information is at the heart of Bayesian tracking. In the classical view of tracking, contacts are obtained from sensors that provide estimates of (some components of) the target state at a given time with a specified measurement error.
In the classic Kalman filter formulation, a measurement (contact) Y_k at time t_k satisfies the measurement equation

    Y_k = M_k X(t_k) + ε_k                                                    (10.9)

where
    Y_k is an r-dimensional real column vector
    X(t_k) is an l-dimensional real column vector
    M_k is an r × l matrix
    ε_k ~ N(0, Σ_k)

Note that ~ N(µ, Σ) means “has a Normal (Gaussian) distribution with mean µ and covariance Σ.” In this case, the measurement is a linear function of the target state and the measurement error is Gaussian. This can be expressed in terms of a likelihood function as follows. Let L_G(y | x) = Pr{Y_k = y | X(t_k) = x}. Then

    L_G(y | x) = (2π)^{−r/2} (det Σ_k)^{−1/2} exp[−(1/2)(y − M_k x)^T Σ_k^{−1} (y − M_k x)]        (10.10)

Note that the measurement y is data that is known and fixed. The target state x is unknown and varies, so that the likelihood function is a function of the target state variable x. Equation 10.10 looks the same as a standard elliptical contact, or estimate of target state, expressed in the form of a multivariate normal distribution, commonly used in Kalman filters. There is a difference, but it is obscured by the symmetrical positions of y and M_k x in the Gaussian density in Equation 10.10. A likelihood function does not represent an estimate of the target state. It looks at the situation in reverse. For each value of target state x, it calculates the probability (density) of obtaining the measurement y given that the target is in state x. In most cases, likelihood functions are not probability (density) functions on the target state space. They need not integrate to one over the target state space. In fact, the likelihood function in Equation 10.10 is a probability density on the target state space only when Y_k is l-dimensional and M_k is an l × l matrix.

Suppose one wants to incorporate into a Kalman filter information such as a bearing measurement, speed measurement, range estimate, or the fact that a sensor did or did not detect the target.
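A small numerical sketch of Equation 10.10 makes the last point concrete. The values below are toy assumptions: a position-velocity state observed through position only, so r = 1 and l = 2. The likelihood depends on x only through M_k x, so it is constant along the unobserved velocity component and cannot be a probability density on the state space:

```python
import numpy as np

# Equation 10.10 evaluated directly for a toy problem:
# state x = (position, velocity), measurement observes position only.
M = np.array([[1.0, 0.0]])    # r x l = 1 x 2 measurement matrix
Sigma = np.array([[4.0]])     # measurement error covariance
y = np.array([3.0])           # the (fixed, known) measurement

def L_G(x):
    r = len(y)
    resid = y - M @ x
    norm = (2 * np.pi) ** (-r / 2) * np.linalg.det(Sigma) ** (-0.5)
    return float(norm * np.exp(-0.5 * resid @ np.linalg.solve(Sigma, resid)))

# Constant along the velocity axis: every velocity is equally likely...
assert np.isclose(L_G(np.array([3.0, 0.0])), L_G(np.array([3.0, 99.0])))
# ...but the likelihood still discriminates in the observed component.
assert L_G(np.array([3.0, 0.0])) > L_G(np.array([10.0, 0.0]))
```

Integrating this function over the unbounded velocity axis diverges, which is exactly why a likelihood function is not, in general, a distribution on target state.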
Each of these is a nonlinear function of the normal Cartesian target state. Separately, a bearing measurement, speed measurement, and range estimate can be handled by forming linear approximations and assuming Gaussian measurement errors or by switching to special non-Cartesian coordinate systems in which the measurements are linear and hopefully the measurement errors are Gaussian. In combining all this information into one tracker, the approximations and the use of disparate coordinate systems become more problematic and dubious. In contrast, the use of likelihood functions to incorporate all this information (and any other information that can be put into the form of a likelihood function) is quite straightforward, no matter how disparate the sensors or their measurement spaces. Section 10.2.4.1 provides a simple example of this process involving a line of bearing measurement and a detection.

10.2.4.1 Line of Bearing Plus Detection Likelihood Functions

Suppose that there is a sensor located in the plane at (70,0) and that it has produced a detection. For this sensor the probability of detection is a function, P_d(r), of the range r from the sensor. Take the case of an underwater sensor such as an array of acoustic hydrophones and a situation where the propagation conditions produce convergence zones of high detection performance that alternate with ranges of poor detection performance. The observation (measurement) in this case is Y = 1 for detection and 0 for no detection. The likelihood function for detection is L_d(1 | x) = P_d(r(x)), where r(x) is the range from the state x to the sensor. Figure 10.1 shows the likelihood function for this observation.

FIGURE 10.1 Detection likelihood function for a sensor at (70,0).
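A likelihood of this kind is easy to build on a grid. The sketch below places the sensor at (70, 0) and combines a range-dependent detection likelihood with a bearing likelihood by point-wise multiplication, as in this section's example. The convergence-zone profile P_d and the 135-degree bearing value are illustrative assumptions, not the chapter's actual curves:

```python
import numpy as np

# Grid sketch of Section 10.2.4.1: detection plus bearing likelihoods.
x1, x2 = np.meshgrid(np.linspace(0, 80, 161), np.linspace(0, 60, 121))
r = np.hypot(x1 - 70.0, x2 - 0.0)   # range from each state to the sensor

def P_d(rng):
    # Hypothetical convergence-zone-like profile: detection peaks recur in range.
    return 0.05 + 0.9 * np.exp(-((rng % 30.0) - 5.0) ** 2 / 20.0)

L_detect = P_d(r)                   # L_d(1 | x) = P_d(r(x))

# Bearing measured counter-clockwise from the x1 axis at the sensor;
# Gaussian error, sigma = 15 degrees, about a measurement of 135 degrees.
bearing = np.degrees(np.arctan2(x2 - 0.0, x1 - 70.0))
L_bear = np.exp(-0.5 * ((bearing - 135.0) / 15.0) ** 2)

L_combined = L_detect * L_bear      # point-wise multiplication
i, j = np.unravel_index(np.argmax(L_combined), L_combined.shape)
print(x1[i, j], x2[i, j])           # state most consistent with both observations
```

The peak of the combined surface lies where a convergence-zone ring crosses the 135-degree bearing fan, which is the qualitative shape of Figure 10.3.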
Suppose that, in addition to the detection, there is a bearing measurement of 135 degrees (measured counter-clockwise from the x_1 axis) with a Gaussian measurement error having mean 0 and standard deviation 15 degrees. Figure 10.2 shows the likelihood function for this observation. Notice that, although the measurement error is Gaussian in bearing, it does not produce a Gaussian likelihood function on the target state space. Furthermore, this likelihood function would integrate to infinity over the whole state space. The information from these two likelihood functions is combined by point-wise multiplication. Figure 10.3 shows the likelihood function that results from this combination.

FIGURE 10.2 Bearing likelihood function for a sensor at (70,0).

FIGURE 10.3 Combined bearing and detection likelihood function.

10.2.4.2 Combining Information Using Likelihood Functions

Although the example of combining likelihood functions presented in Section 10.2.4.1 is simple, it illustrates the power of using likelihood functions to represent and combine information. A likelihood function converts the information in a measurement to a function on the target state space. Since all information is represented on the same state space, it can easily and correctly be combined, regardless of how disparate the sources of the information. The only limitation is the ability to compute the likelihood function corresponding to the measurement or the information to be incorporated. As an example, subjective information can often be put into the form of a likelihood function and incorporated into a tracker if desired.

10.3 Multiple-Target Tracking without Contacts or Association (Unified Tracking)

In this section, the Bayesian tracking model for a single target is extended to multiple targets in a way that allows multiple-target tracking without calling contacts or performing data association.
10.3.1 Multiple-Target Motion Model

In Section 10.2, the prior knowledge about the single target’s state and its motion through the target state space S were represented in terms of a stochastic process {X(t); t ≥ 0} where X(t) is the target state at time t. This motion model is now generalized to multiple targets.

Begin the multiple-target tracking problem at time t = 0. The total number of targets is unknown but bounded by N, which is known. We assume a known bound on the number of targets because it allows us to simplify the presentation and produces no restriction in practice. Designate a region, R, which defines the boundary of the tracking problem. Activity outside of R has no importance. For example, we might be interested in targets having only a certain range of speeds or contained within a certain geographic region. Add an additional state φ to the target state space S. If a target is not in the region R, it is considered to be in state φ. Let S+ = S ∪ {φ} be the extended state space for a single target and let (S+)^N = S+ × … × S+ be the joint target state space, where the product is taken N times.

10.3.1.1 Multiple-Target Motion Process

Prior knowledge about the targets and their “movements” through the state space S+ is expressed as a stochastic process X = {X(t); t ≥ 0}. Specifically, let X(t) = (X_1(t), …, X_N(t)) be the state of the system at time t where X_n(t) ∈ S+ is the state of target n at time t. The term “state of the system” is used to mean the joint state of all of the targets. The value of the random variable X_n(t) indicates whether target n is present in R and, if so, in what state. The number of components of X(t) with states not equal to φ at time t gives the number of targets present in R at time t. Assume that the stochastic process X is Markovian in the state space (S+)^N and that the process has an associated transition function.
Let q_k(s_k | s_{k−1}) = Pr{X(t_k) = s_k | X(t_{k−1}) = s_{k−1}} for k ≥ 1, and let q_0 be the probability (density) function for X(0). By the Markov assumption

    Pr{X(t_1) = s_1, …, X(t_K) = s_K} = ∫ ∏_{k=1}^{K} q_k(s_k | s_{k−1}) q_0(s_0) ds_0             (10.11)

The state space (S+)^N of the Markov process X has a measure associated with it. If X is a discrete-space Markov chain, then the measure is discrete and integration becomes summation. If the space is continuous, then functions such as transition functions become densities on (S+)^N with respect to that measure. If (S+)^N has both continuous and discrete components, then the measure will be the product or mixture of discrete and continuous measures. The symbol ds will be used to indicate integration with respect to the measure on (S+)^N, whether it is discrete or not. When the measure is discrete, the integrals become summations. Similarly, the notation Pr indicates either probability or probability density as appropriate.

10.3.2 Multiple-Target Likelihood Functions

There is a set of sensors that report observations at a discrete sequence of possibly random times. These sensors may be of different types and may report different information. The sensors may report only when they have a contact or on a regular basis. Let Z(t, j) be an observation from sensor j at time t. Observations from sensor j take values in the measurement space H_j. Each sensor may have a different measurement space. For each sensor j, assume that one can compute

    Pr{Z(t, j) = z | X(t) = s}  for z ∈ H_j and s ∈ (S+)^N                    (10.12)

To compute the probabilities in Equation 10.12, one must know the distribution of the sensor response conditioned on the value of the state s. In contrast to Section 10.2, the likelihood functions in this section can depend on the joint state of all the targets. The relationship between the observation and the state s may be linear or nonlinear, and the probability distribution may be Gaussian or non-Gaussian.
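The role of the state φ in the motion model of Section 10.3.1 can be sketched with a single-target Markov chain on S+ in which a target may leave the region (entering φ) or appear from it. All transition probabilities below are illustrative assumptions:

```python
import numpy as np

# Toy Markov transition on the extended space S+ = {cell 0, 1, 2, phi}.
# Column s of Q is the distribution of the next state given current state s.
n_cells = 3
PHI = n_cells                        # index 3 plays the role of the state phi
Q = np.zeros((4, 4))
for s in range(n_cells):
    Q[s, s] = 0.6                    # present target stays in place
    Q[(s + 1) % n_cells, s] += 0.3   # or drifts to the next cell
    Q[PHI, s] += 0.1                 # or leaves the region (enters phi)
Q[PHI, PHI] = 0.7                    # absent target stays absent
Q[:n_cells, PHI] = 0.3 / n_cells     # or appears uniformly in the region

assert np.allclose(Q.sum(axis=0), 1.0)   # each column is a distribution
p0 = np.array([0.0, 0.0, 0.0, 1.0])      # initially absent with certainty
p1 = Q @ p0                              # one motion update (Eq. 10.11 step)
print(p1)
```

The joint N-target transition function is then built from chains like this one, with the number of non-φ components of the joint state giving the number of targets present.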
Suppose that by time t, observations have been obtained at the set of discrete times 0 ≤ t_1 ≤ … ≤ t_K ≤ t. To allow for the possibility of receiving more than one sensor observation at a given time, let Y_k be the set of sensor observations received at time t_k. Let y_k denote a value of the random variable Y_k. Extend Equation 10.12 to assume that the following computation can be made

    L_k(y_k | s) = Pr{Y_k = y_k | X(t_k) = s}  for s ∈ (S+)^N                 (10.13)

L_k(y_k | ·) is called the likelihood function for the observation Y_k = y_k. The computation in Equation 10.13 can account for correlation among sensor responses if required.

Let Y(t) = (Y_1, Y_2, …, Y_K) and y = (y_1, …, y_K). Define L(y | s_1, …, s_K) = Pr{Y(t) = y | X(t_1) = s_1, …, X(t_K) = s_K}. In parallel with Section 10.2, assume that

    Pr{Y(t) = y | X(u) = s(u), 0 ≤ u ≤ t} = L(y | s(t_1), …, s(t_K))          (10.14)

and

    L(y | s_1, …, s_K) = ∏_{k=1}^{K} L_k(y_k | s_k)                           (10.15)

Equation 10.14 assumes that the distribution of the sensor response at the times {t_k, k = 1, …, K} depends only on the system states at those times. Equation 10.15 assumes independence of the sensor response distributions across the observation times. The effect of both assumptions is to assume that the sensor response at time t_k depends only on the system state at that time.

10.3.3 Posterior Distribution

For unified tracking, the tracking problem is equivalent to computing the posterior distribution on X(t) given Y(t). The posterior distribution of X(t) represents our knowledge of the number of targets present and their state at time t given Y(t). From this distribution point estimates can be computed, when appropriate, such as maximum a posteriori probability estimates or means. Define q(s_1, …, s_K) = Pr{X(t_1) = s_1, …, X(t_K) = s_K} to be the prior probability (density) that the process X passes through the states s_1, …, s_K at times t_1, …, t_K. Let q_0 be the probability (density) function for X(0).
By the Markov assumption

    q(s_1, …, s_K) = ∫ ∏_{k=1}^{K} q_k(s_k | s_{k−1}) q_0(s_0) ds_0           (10.16)

Let p(t, s) = Pr{X(t) = s | Y(t)}. The function p(t, ·) gives the posterior distribution on X(t) given Y(t). By Bayes’ theorem,

    p(t_K, s_K) = Pr{Y(t_K) = y and X(t_K) = s_K} / Pr{Y(t_K) = y}

                = [∫ L(y | s_1, …, s_K) q(s_1, s_2, …, s_K) ds_1 … ds_{K−1}] /
                  [∫ L(y | s_1, …, s_K) q(s_1, s_2, …, s_K) ds_1 … ds_K]                           (10.17)
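For small discrete problems the posterior in Equation 10.17 can be computed exactly by enumerating the joint state space. The sketch below assumes N = 2 targets on a three-cell grid with the state φ, and a toy sensor whose likelihood depends on the joint state only through the number of targets in one cell; no contacts are called and no association hypotheses are formed:

```python
import numpy as np
from itertools import product

# Toy unified-tracking posterior: at most N = 2 targets on cells {0, 1, 2},
# with PHI ("not present") as a fourth single-target state.
cells = [0, 1, 2]
PHI = 3
S_plus = cells + [PHI]
joint_states = list(product(S_plus, repeat=2))   # the joint space S+ x S+

# Toy prior: each target independently present (uniform cell) with prob 0.6.
def prior_1(s):
    return 0.6 / 3 if s != PHI else 0.4
prior = np.array([prior_1(a) * prior_1(b) for a, b in joint_states])

# Toy sensor staring at cell 0: it reports z = 1 with a probability that
# depends on the joint state only through how many targets occupy cell 0.
def likelihood(joint, z=1):
    count = sum(1 for s in joint if s == 0)
    p_detect = [0.05, 0.7, 0.95][count]          # Pr{z = 1 | count targets}
    return p_detect if z == 1 else 1.0 - p_detect

L = np.array([likelihood(js) for js in joint_states])
posterior = prior * L
posterior /= posterior.sum()                     # Bayes' theorem, Eq. 10.17

# Posterior probability that at least one target occupies cell 0.
p_occ = sum(p for js, p in zip(joint_states, posterior) if 0 in js)
assert np.isclose(posterior.sum(), 1.0)
print(round(p_occ, 3))
```

Because the likelihood acts on the joint state, the single detection-like report raises the posterior probability of occupancy well above its prior value without ever being assigned to a particular target.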

