Understanding Information Based Processes in Traffic Modeling
(2017) FMA820 20171, Mathematics (Faculty of Engineering)
 Abstract
 This thesis covers the automatic update of the model parameters in
traffic modelling using two mathematical tools, the Relative Entropy
Rate and the pathwise Fisher Information Matrix. The traffic model
used is based on a lattice of cells representing the road; each cell is
either occupied or empty, and the lattice is updated with a kinetic Monte
Carlo algorithm. Because the traffic density fluctuates during a simulation,
it is worthwhile to update the model parameters, since parameters that work
well for dense traffic do not work as well for sparse traffic. The two tools
are used to identify model parameters that are not optimal and to decide how
to update them.

Popular Abstract
Traffic modelling is useful when we are trying to understand complicated traffic phenomena that can result in congested traffic. Similarly, traffic modelling is important when planning to build a new road or when developing automated vehicles, since a complicated situation can be explored in a computer instead of on a road where lives could be endangered.
However, for this to be possible it is necessary to have good models. A substantial amount of research is therefore dedicated to improving existing models or to finding new, better ones. The work of this thesis has focused on understanding one of these models, as well as implementing (and understanding) two mathematical tools to improve it.
One of the problems arising when modelling traffic is how to take into
account human behavior. Human drivers do not always make logical
decisions when choosing how to drive, nor do they have total awareness of
what is going on around them, and they certainly do not have the awareness
of some kind of vehicle hive mind. A human driver cannot possibly see
how the traffic flow changes a couple of kilometers ahead or just around the
corner. This randomness can be hard to model since there is no pattern to
the human "errors".
Many models have tried to implement this with varying results. One
example of this is the cellular automaton model used in [2], where the randomness enters by adding a probability of a vehicle slowing down each
time the system is updated. This randomness is very important for the
model, especially if there are periodic boundary conditions, since otherwise
the output from the model might become periodic.
A common problem among these models, however, is that the added
randomness is not truly random, which can result in a periodic pattern
being added to the model. An approach that does not have this problem
is to use a stochastic model based on a kinetic Monte Carlo algorithm as
proposed in [8, 9, 10]. The stochastic model divides the road into a lattice
where each cell has the width of a traffic lane and the length of a car plus a safety distance (in total, seven meters). Before the simulation starts, each cell is assigned a number: zero if the cell is empty, one if the cell is occupied (see Figure 11 in the thesis).
The next step is to calculate the rates. A vehicle has three ways to move:
left, right, or forward, so three rates have to be calculated for each vehicle. If a vehicle cannot move into a cell because it is occupied, or because there is no cell to move into (e.g. moving to the right from the rightmost lane), the rate corresponding to that move is zero. Once the rates are calculated they are 'stacked', so that the 'length' a move occupies corresponds to the size of its rate (see Figure 12 in the thesis); zero rates occupy zero length. A random number is then picked and used to choose which move to make. Note that only one vehicle moves per step.
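The stacking and selection step described above can be sketched as follows. This is an illustrative sketch, not the thesis code: the `(vehicle, move)` dictionary layout and the rate values are assumptions made for the example.

```python
import random

# One kinetic Monte Carlo step: pick a single (vehicle, move) pair with
# probability proportional to its rate. Blocked moves have rate zero,
# occupy zero length on the stack, and can never be chosen.
def kmc_step(rates):
    events = [(key, r) for key, r in rates.items() if r > 0]
    total = sum(r for _, r in events)
    u = random.uniform(0.0, total)       # random point on the stacked rates
    cumulative = 0.0
    for key, r in events:
        cumulative += r                  # 'stack' the rates end to end
        if u <= cumulative:
            return key                   # the move whose segment contains u
    return events[-1][0]                 # guard against floating-point edge

# hypothetical rates for two vehicles; vehicle 0 cannot move right
rates = {(0, "forward"): 2.0, (0, "left"): 0.5, (0, "right"): 0.0,
         (1, "forward"): 1.0, (1, "left"): 0.0, (1, "right"): 0.3}
vehicle, move = kmc_step(rates)          # exactly one vehicle moves
```

Because a blocked move has rate zero, it can never be selected, which is why the chosen move is always possible to make.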
When that move has been made the algorithm starts over, but this
time only the rates affected by the move have to be recalculated. If
new vehicles are added at the beginning of the road, the rates affected by
the new vehicles have to be recalculated (or calculated for the first
time for the new vehicles). An advantage of this model is that the chosen
move is always possible to make, so the model cannot get stuck trying
to find the next possible move. The simulation can always move forward.
A disadvantage of this model, and of other traffic models, is that when
the traffic density changes, so do the optimal model parameters. To counter
this problem two tools have been used, the Relative Entropy Rate (RER)
and the pathwise Fisher Information Matrix (pFIM). The main work of this
thesis has been to implement these tools to improve the traffic simulation.
Using these tools the parameters can be updated during a simulation so
that both dense and sparse traffic can be modeled without stopping the
simulation to manually update the parameters. The RER and the pFIM
use the simulated data from the model to identify which parameters are
sensitive to change and how to change these parameters to improve the
simulation. To prevent the parameters from becoming unrealistic (for
example, cars do not travel at 1000 km/h), a lower and an upper bound are
set for each parameter.
The pFIM uses the calculated rates to find the sensitive parameters and
to find out if there are cross-correlations between any of the parameters. If
there is a significant cross-correlation between two parameters, only one of
them can be updated, since if one of them is updated the calculated pFIM
and RER results for the other parameter become invalid. An advantage of
the pFIM is that it only needs the rates that have already been calculated,
and even though it needs gradients to find cross-correlations and to identify
sensitive parameters, those gradients are analytic and can be calculated before the simulation. Using the pFIM is therefore computationally quite cheap.
Based on the results of the pFIM, the RER is calculated for those parameters that will be updated. Calculating the RER is computationally costly, so it is advantageous to use it in conjunction with the pFIM, since unnecessary computations can then be avoided. What the RER does is use
perturbed rates, i.e. 'What would the rates have been if the simulation had given
the same results but with this value of parameter x instead?', and compare
them to the normal rates already calculated in the simulation. A low RER
is good; a high RER is bad.
By calculating the RER for both a positive and a negative perturbation
for all relevant parameters (whether a parameter is relevant is decided by
whether the pFIM says it should be updated), it is possible to update the
parameters by using the parameter values corresponding to the lowest RER.
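The update rule for a single parameter can be sketched as follows. Here `toy_rer` is a stand-in for the actual RER estimator, which compares perturbed rates against the rates already computed in the simulation; the bounds correspond to the lower and upper limits mentioned earlier.

```python
# Evaluate the RER for a positive and a negative perturbation of one
# parameter, keep the value with the lowest RER, and clamp it to
# [lower, upper] so the parameter stays physically reasonable.
def update_parameter(theta, delta, rer, lower, upper):
    candidates = [theta, theta + delta, theta - delta]
    best = min(candidates, key=rer)       # lowest RER wins
    return min(max(best, lower), upper)   # enforce the bounds

# toy RER with a minimum at 1.2; the true estimator uses simulated rates
toy_rer = lambda t: (t - 1.2) ** 2
theta = update_parameter(1.0, 0.1, toy_rer, lower=0.0, upper=2.0)
# theta moves toward the perturbation with the lower RER (here 1.1)
```

Repeating this during the simulation lets the parameters track the changing traffic density without stopping to retune them by hand.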
The improvement of the model from using the RER and the pFIM can
be seen in Figures 13 and 14 in the thesis. The actual traffic is shown in blue and comes from measurements on a road outside of San Francisco. The simulated traffic from the model is shown in red. In the left plot the parameters are not updated during the simulation; as can be seen, this works for the first few minutes, but then the simulation does not follow the change in traffic flow very well. In the right plot the parameters are updated during the simulation, and the simulated traffic flow follows the real traffic flow quite well. There is still room for improvement, but this shows that by updating the parameters during the simulation it is possible to obtain more accurate results.
Please use this url to cite or link to this publication:
http://lup.lub.lu.se/studentpapers/record/8918262
 author
 Stövring-Nielsen, Linnea
 supervisor

 Alexandros Sopasakis (LU)
 organization
 course
 FMA820 20171
 year
 2017
 type
 H1  Master's Degree (One Year)
 subject
 keywords
 Traffic Modeling, Relative Entropy
 report number
 LUTFNA-3040-2017
 language
 English
 id
 8918262
 date added to LUP
 2017-06-22 13:06:28
 date last changed
 2017-06-22 13:06:28
@misc{8918262, abstract = {This thesis covers the automatic update of the model parameters in traffic modelling using two mathematical tools, the Relative Entropy Rate and the pathwise Fisher Information Matrix. The traffic model used is based on a lattice of cells, representing the road, that can be either occupied or empty that is updated with a kinetic Monte Carlo algorithm. Because of fluctuating traffic density during a simulation it becomes interesting to try to update the model parameters since parameters that work well for dense traffic do not work as well for sparse traffic. The two tools are used to identify model parameters that are not optimal and to decide how to update these model parameters.}, author = {Stövring-Nielsen, Linnea}, keyword = {Traffic Modeling, Relative Entropy}, language = {eng}, note = {Student Paper}, title = {Understanding Information Based Processes in Traffic Modeling}, year = {2017}, }