
Numerical Calculation


A theme of this book is that Riemann sums are an effective means of analyzing random variability phenomena, enabling a comprehensive theory to be constructed. The traditional theory of probability is based on measurable sets formed by operations involving infinitely many steps. In contrast, Riemann sums have only a finite number of terms. On the face of it, such sums should be relatively easy to calculate. This chapter contains a number of such calculations, using Maple 15 computer software.
Numerical and empirical investigations warrant detailed study in their own right. But such a project is beyond the scope of this book. All that can be given are a few indications and pointers.
In preceding chapters Riemann sums have been used to calculate and analyze measurability of sets and functions, expectation values of random variables, state functions of diffusion systems and quantum mechanics, Feynman diagrams, valuation of share options, and strong and weak stochastic integrals.
In order to produce a robust theory, considerable subtlety has been built into the construction of Riemann sums. By making the Riemann sums conform to construction rules called gauges it has been possible to establish rules and criteria for sophisticated mathematical operations by which many classical results could be deduced, and also many new results which are beyond the reach of traditional theory.
But the essential simplicity of Riemann sums is a constant feature. Furthermore, in the various themes discussed in this book, the expressions encountered in the construction of Riemann sums were the familiar polynomial, exponential, trigonometric, and logarithmic functions. These are not expressions of an exotic or pathological kind which require very delicately constructed partitions.
In one dimension, integrals of polynomial functions can be estimated with Riemann sums of a straightforward kind, without having to resort to the simple functions of Lebesgue theory, or even to the δ-gauges of Riemann-complete theory. Similarly, good estimates of the Henstock integrals of the functions investigated in this book can often be obtained with regular partitions—or even binary partitions—of the infinite-dimensional domain of integration.
To sum up, the calculations involve finite Riemann sums. And in many cases the partitions involved are particularly amenable to numerical calculation. So it should not be a surprise that many of the steps involved in numerical calculation of the themes of this book can be illustrated with Maple software.

9.1   Introduction

This chapter presents Maple estimates of some of the calculations encountered in Chapters 7 and 8. With T = ]0, t] and
Image
consider deterministic calculations f performed on the unpredictable or random elements xs which, for Image, are the potential joint outcomes of an experiment or joint observation denoted by X—that is,
Image
where, for each s, Xs denotes the measurement or process of determination of the individual datum xs. For any potential joint outcome x, the deterministic calculation f(Image) is subject to the random variability of each xs for s ∈ T. The random variability in the final outcome f(Image), induced by the joint random variability of the xs (s ∈ T), is measured by a joint potentiality distribution function FX = Image, defined on the events I = I[N] for N ⊆ T.
For real-valued f, a potential final outcome f(Image) can be regarded as a potential value y ∈ R. Then y is the unpredictable outcome of an observation Y. And, provided f is measurable and FX is continuous, the likelihoods of values y are measured by a potentiality distribution function FY defined for cells J ∈ I(R), with
Image
This is the elementary form Y ≃ y[R, FY] of the contingent joint-basic observable f(Image), as described in Sections 5.2 and 5.3.
We are primarily concerned with Brownian motion X and, except where otherwise stated, will take FX to be the Brownian distribution function G.
A key point in these illustrations is the representation of the elementary-form observables Y by means of histograms. Using Maple, we will, for particular calculations f on values in Image, demonstrate with histograms both the possible values y of Y and also the distribution function FY (depending on FX and f) which measures the likelihoods in R.
Histograms are a useful way of visualizing elementary observables Y ≃ y[R, FY], because they exhibit the range of potential values y ∈ R of the observable, and they indicate the values FY(J) of cells J ⊂ R. Constructed, as they are, on a finite number of cells J partitioning the domain R, histograms are particularly apt for representing observables in the Riemann sum framework of this book. Histograms capture and display the distinguishing features of elementary observables.1
Of particular interest are the various kinds of calculation f discussed in Chapter 8. These calculations involve
  • selection of partition points N = {..., s, s′,...} in T;
  • for any potential outcome Image, forming terms composed of differences or increments such as
    Image
  • calculating Riemann sums on domain Image using these terms.
Such a calculation depends not just on the potential joint occurrence Image, but also on the elements N ⊂ T, often in the form of differences sj − sj−1. This is the joint-contingent view, in contrast to the elementary view, and developing the theory more fully in this direction requires ideas from Section A.1 of the Epilogue.
But if the Riemann sums converge for each Image, then the limit does not depend on any particular N ⊂ T, and when the limit of the Riemann sums is taken, the outcome can be treated as a potential datum of a strong stochastic integral—a well-defined elementary observable. The features of such a calculation are displayed in the histogram of Figure 9.4.
When the Riemann sums do not converge, and in default of a suitable theory involving joint-contingent observables with variable N, the “weak stochastic integral” device is available. To visualize an observable defined by this method, take several successive partitions, each one refining the previous one. Then, for each such partition, perform the Riemann sum calculation using a sample of potential joint outcomes Image, and draw a histogram of the resulting real numbers. By examining the histograms for successive partitions it is possible to get a visual sense of the range of possible values y of the weak stochastic integral in each case, along with the “shape” of the corresponding likelihood values FY(J); and also a sense of how these elements change when the partitions are refined successively.
Figures 9.6, 9.7, and 9.8 illustrate this for Image, where X is standard Brownian motion. Maple allows us to create samples of thousands of Riemann sums, and the histogram of any such sample constitutes a visualization of the particular elementary observable Y ≃ y[R, FY] whose potential datum is
Image
These diagrams demonstrate what is meant by “weak stochastic integral”, in which the Riemann sums do not generally converge for particular outcomes y = f(Image). The same device is applied to Itô’s formula in Section 9.5.

9.2   Random Walk

As an introduction, random walk and Brownian path diagrams can easily be produced with Maple.
The following Maple code simulates a random walk whose steps or increments are normally distributed with mean zero, standard deviation 0.1, and variance 0.1² = 0.01. Provided the increments are independent, Theorem 100 says that the variance of the sum of the increments equals the sum of the variances of the increments. Therefore the unit interval T = ]0,1] is partitioned by 100 equal intervals of length 0.01.
Calculation 1
restart:
with(Statistics):
randomize():
BrownianIncrements := Sample(Normal(0, 0.1), 100):
ListBrownianIncrements := [seq(BrownianIncrements[j], j=1..100)]:
ordinate[0] := 0:
for j from 1 to 100 do
   ordinate[j] := ordinate[j − 1] + ListBrownianIncrements[j]:
end do:
abscissa := [seq(0.01*j, j=1..100)]:
RandomWalk := [[0, 0], seq([abscissa[j], ordinate[j]], j=1..100)]:
plot(RandomWalk);
Line 1 of Calculation 1 cancels any preceding Maple code. Line 2 activates Maple’s statistics functions, such as Sample. Line 3 resets the random number generation algorithm of the program. Line 4 produces a random sample of 100 numbers, each of them sampled from a normal distribution with mean zero, standard deviation 0.1, and variance 0.1² = 0.01. Line 5 converts these 100 items into a Maple list structure, indexed from 1 to 100. Line 6 assigns 0 as the ordinate (vertical coordinate) of the zeroth item. Lines 7 to 9 assign the sum of the first j Brownian increments as the jth ordinate, for j from 1 to 100. Line 10 assigns the value 0.01j as the jth abscissa (horizontal coordinate), for j from 1 to 100. Line 11 assembles 101 abscissa–ordinate pairs into a Maple indexed-list structure. Line 12 plots the 101 points of this list, joining each consecutive pair of points with a straight line segment.
Image
Figure 9.1: Random walk with normal increments.
In Figure 9.1, the points are the important thing. The straight line segments joining up the points have no particular significance, but they are helpful in drawing the eye from one point to the next.
For j = 0 to 100, the jth point of the graph is (sj, x(sj)), where sj = 0.01j and xsj is the sum of the first j random normal increments; that is,
x(sj) = xsj = ordinate[j].
Thus x1 = x(s100) = x(1) = ordinate[100]. One particular execution of Calculation 1 produced
x1 = 0.49372516204936334.
If the Maple program is re-run, 100 random increments produce a different random walk, and a different datum x1. Here is an example:
x1 = −2.268719956985433.
Calculation 4 calculates a sample of 1000 values of x(1). The histogram confirms that, subject to the limitations of the sampling process (which is not truly random), the data values produced by this program are from a normal distribution with mean zero and standard deviation 1.
The following program has 10,000 increments, each with standard deviation 0.01 and variance 0.01² = 0.0001, giving us a partition of the domain T = ]0,1] consisting of 10,000 intervals or steps. The graph of the sample path conforms to the customary representation of Brownian motion.
Calculation 2
restart:
with(Statistics):
randomize():
BrownianIncrements := Sample(Normal(0, 0.01), 10000):
ListBrownianIncrements := [seq(BrownianIncrements[j], j=1..10000)]:
ordinate[0] := 0:
for j from 1 to 10000 do
   ordinate[j] := ordinate[j − 1] + ListBrownianIncrements[j]:
end do:
abscissa := [seq(0.0001*j, j=1..10000)]:
BrownianPath := [[0, 0], seq([abscissa[j], ordinate[j]], j=1..10000)]:
plot(BrownianPath);
Image
Figure 9.2: Sample path of Brownian motion.
Figure 9.2 has the familiar2 jagged path shape. By using a bigger sample it is possible to produce a graph that looks less like the random walk of Figure 9.1, and more like the traditional picture of standard Brownian motion.
The graph consists of points joined by straight line segments, just like Figure 9.1. The points have both mathematical and physical significance. The straight line segments joining the points have neither. The only substantial difference between Figure 9.1 and Figure 9.2 is that the latter has more sample points.
With T = ]0,1], each of Figure 9.1 and Figure 9.2 is a sample datum of a Riemann sum estimate of the strong stochastic integral
Image
In each case, the partition of Image is regular (since each cell has the same length), though not binary. The Riemann sum observable in Figure 9.1 is
Image
where tj = j·10⁻², and Image = ](j − 1)·10⁻², j·10⁻²].
The observables Image and Image have joint-contingent representation
Image
respectively, where G is the standard Brownian distribution function. The representation of each of them in elementary form is particularly easy to establish, because of the “telescopic” cancellation of terms in the Riemann sum (in every Riemann sum, in fact). Cancellation gives
Image
and
Image
The latter has elementary form. Then, writing y = x1, we have
X1 = Y,     Y ≃ y[R,N];
where N is the standard normal distribution function N0,1 with mean 0 and standard deviation 1.
These calculations are confirmed by applying Theorems 159 and 160 to the observable Image. They are also confirmed in Maple Calculation 4, and in the larger sample of 10,000 Riemann sums of 100 terms each.

9.3   Calculation of Strong Stochastic Integrals

In Calculation 3 Maple’s Sample command is used to select 100,000 random values from a normal distribution with mean zero and standard deviation 0.1, so the variance is 0.01. In Calculation 4 these are taken in groups of 100 values at a time, giving 1000 samples of size 100. If the 100 sampled values were really instances of 100 independent basic normal observables (which, strictly speaking, cannot necessarily be guaranteed by computer software) then the variance of the sum of each group of 100 values would be 100 × 0.01 = 1. The following examines the output of 100,000 normal increments.
Calculation 3
restart:
with(Statistics):
BrownianIncrements := Sample(Normal(0, 0.1), 100000):
dx := [seq(BrownianIncrements[i], i=1..100000)]:
Histogram(dx);
Line 1 of Calculation 3 gets rid of the stored results of any previous Maple calculations. Line 2 activates the Maple Statistics functions. Line 3 generates a random sample of 100,000 values of a normal distribution with mean zero and standard deviation 0.1. Line 4 converts this output into a Maple list whose individual elements are indexed from 1 to 100,000. Line 5 produces a histogram of these values, displayed in Figure 9.3.
The theory of random variation can be thought of as a theory of estimation or approximation. Given a single datum (estimate or approximate value), it is not necessarily obvious what it is an estimate of; nor what is the “true value” to which the datum approximates. But if, as in Calculation 3, a great number of estimates can be produced, it may be possible to deduce the “true value”. Calculation 3 provides 100,000 estimates. To see these numbers, just change the colons to semicolons in the Maple code. The program will then display a list of 100,000 numbers—something it may be preferable to avoid.
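Rather than displaying the whole list, a few entries can be inspected directly. The following commands are a minimal sketch (assuming Calculation 3 has just been run, so that the list dx is in memory):
# Inspect the first five sampled increments without printing all 100,000.
dx[1..5];
# Confirm the number of sampled values.
nops(dx);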
What is actually needed is not such a list, but a “sense”, or summary, of these numbers as estimates of a “true value”, and that is provided by the histogram, which indicates that:
Image
Figure 9.3: Histogram for 1000 samples, Riemann sum step 0.01.
  • the estimates cluster around the value 0; or
  • the likelihood that an estimate will be close to zero is much greater than the likelihood it will not be close to zero.
In other words, the histogram is a summary or picture of the observable or random variable for which the numbers produced by Calculation 3 are data.
Inspection of the histogram suggests that the mean value of the sample is zero. Most of the values lie within the range −0.35 to +0.35, suggesting a standard deviation of 0.1. This is confirmed by applying Maple functions to the sample data generated by Calculation 3:
Mean(dx);
For one particular sample this gave result −0.0003180143098.
StandardDeviation(dx);
This gave output 0.0995857853503455, close to 0.1. Combined with Figure 9.3, this gives some broad confirmation that, for a large sample (100,000), Maple’s random normal increment generator can be relied on, within reason. While statistical independence of the data generated by the Sample command cannot be guaranteed, this histogram has the appearance of a normally distributed observable with the appropriate mean and standard deviation, allowing us to proceed with due caution.
Calculation 4 below selects 100 consecutive values at a time from the list of 100,000 values produced by Calculation 3, giving 1000 groups of 100 values. The 100 values in each group are taken to be independent Brownian increments. These increments are added in each of the 1000 groups, so the total of the increments in each group can be regarded as:
  • the outcome of a random walk of 100 steps, or
  • a Riemann sum estimate Image of the stochastic integral Image (or Image of the integrand g1 = dxs on the domain T = ]0,t], with t = 1); the Riemann sum consists of 100 terms.
The Maple code produces a sample of 1000 of the potential values of the observable Y ≃ y[R, FY], where each sample value y is obtained by calculating the Riemann sum Image
Image
Thus the joint-contingent and elementary forms are, respectively,
Image
Since the strong stochastic integral of integrand g1 exists, with value equal to Image for every partition of Image, this gives
Image
When Maple produces the histogram Figure 9.4 of the list of 1000 values of y, it can be expected to confirm the theoretical prediction of Example 59—that occurrences y are sample data of a normally distributed observable with mean 0 and standard deviation 1.
From the latter point of view, the Maple code in Calculation 4 produces Riemann sum estimates of 1000 “independent” instances of the strong stochastic integral
Image
The elementary form of this reduces simply to Y ≃ y[R, N0,1], a normal random variable with mean zero and standard deviation
Image
The histogram of Figure 9.4 provides some empirical confirmation of this, but perhaps not very convincing in visual terms.
Calculation 4
for r from 1 to 1000 do
   k := 100*r:
   j := k − 99:
   x1[r] := add(dx[i], i=j..k):
end do:
X1 := [seq(x1[r], r=1..1000)]:
Histogram(X1);
Provided Calculation 4 is run immediately after Calculation 3, the data from the latter feeds into the former. Otherwise Calculation 3 must be re-run. The output of Calculation 4 is the histogram in Figure 9.4.
To check the parameters of this histogram, the command Mean(X1) gave a result −0.03180143098 from a Maple sample, while StandardDeviation(X1) gave 1.00075019279754. So the mean of a sample of 1000 Riemann sums came to approximately zero, and their standard deviation was approximately 1; while their histogram is at least vaguely reminiscent of the shape of a normal distribution.
Image
Figure 9.4: Histogram for 1000 sums of 100 normal increments.
The cancellation or “telescoping” argument of Section 8.3 establishes that Brownian variability of each joint-data component xs of the contingent observable Image has no effect on the variability of Xt. In other words, Image simply replicates Xt. This is suggested (even if only faintly) in the histogram of Figure 9.4.
The fact that Image exists as a strong stochastic integral, equal to its Riemann sum estimates regardless of which partitions of ]0,1] are chosen, makes the preceding remarks unsurprising. It is obvious that the distribution function FX1 of the elementary observable is Image, with t = 1 in this case. So the histogram above provides partial confirmation of the obvious.
The differences in shape between the histograms of Figures 9.4 and 9.3 suggest that data generated by Maple should be treated with caution. Randomness and independence are ideal mathematical concepts, and perfect manifestations of them cannot be expected to appear in practice.
In order to get a better simulation, amend the code in Calculation 3 to produce 1,000,000 Brownian increments instead of 100,000, and then amend Calculation 4 to deliver 10,000 (instead of 1000) Riemann sums of 100 terms each. The latter produces the histogram of the Riemann sum values in Figure 9.5. This histogram more plausibly conforms to the theoretical result: that if Image is standard Brownian motion, then, for t = 1, the elementary form of the strong stochastic integral Image has standard normal distribution function N0,1, with mean 0 and standard deviation 1.
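The amended code is not reproduced in the text; the following sketch shows one way of making the amendments just described (variable names carried over from Calculations 3 and 4):
restart:
with(Statistics):
# Amended Calculation 3: one million normal increments.
BrownianIncrements := Sample(Normal(0, 0.1), 1000000):
dx := [seq(BrownianIncrements[i], i=1..1000000)]:
# Amended Calculation 4: 10,000 Riemann sums of 100 terms each.
for r from 1 to 10000 do
   k := 100*r:
   j := k - 99:
   x1[r] := add(dx[i], i=j..k):
end do:
X1 := [seq(x1[r], r=1..10000)]:
Histogram(X1);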
Quite large samples are used here as a practical means of compensating for any lack of randomness and independence in Maple-generated sample data. There is a penalty for this, in that the running time of the programs can be relatively long.

9.4   Calculation of Weak Stochastic Integrals

With T = ]0,1], this section investigates numerically the weak stochastic integral Image, for which the Riemann sum estimates do not generally converge as the partitions of the domain ]0,1] are successively refined.
Section 8.4 based the meaning of weak stochastic integrals on rth binary partitions of ]0,1] and the corresponding rth binary Riemann sum contingent observables given by the sum of the squares of the rth binary Brownian increments.
For any given r the rth binary Riemann sum contingent observable has elementary form given by
Image
The potential data-values Image are unbounded above, and have zero as lower bound. This holds for all r. Therefore, unlike Image, none of the observables Image is normally distributed. (The potential data-values in a normal distribution are unbounded above and below.)
Image
Figure 9.5: Histogram for 10,000 sums of 100 normal increments.
Loosely speaking, the idea behind the weak stochastic integral is as follows. As r → ∞ the rth binary Riemann sums may converge for some, but not all, of the potential joint outcomes Image of the Brownian motion Image. But when the set of rth binary Riemann sums is formed for each Image, the Maple calculations below show that, for integrand Image, these sets tend to cluster ever more closely around some target value as r increases to infinity. This “target value” is the weak stochastic integral of g2 on Image.
The test for weak convergence of the rth binary Riemann sums of any stochastic integrand g(XS) is as follows.
  • Find the “target value”; that is, the observable k(Image) which is to be candidate for the weak stochastic integral of g(XS). (In the case of Image, the candidate is the constant 1.)
  • For each r calculate the difference kr(Image) between the rth binary Riemann sum of g(XS) and k(Image):
    Image
  • Determine whether Image → 0 as r → ∞. If it does, then k(Image) is the weak stochastic integral of Image on Image.
The function Image is an outer measure—a kind of volume—defined on subsets S of Image. If Sr is the set Image, then continuity of G implies
Image
provided kr is also continuous; so, under these conditions, Image → 0 implies that the “volume” Image → 0. The aim of this section is to produce numbers and diagrams which illustrate this.
The distribution function of the contingent rth binary Riemann observable Image is the Brownian distribution function G. What about Image, the distribution function of the elementary form Image, with z(r) = Image
Denote the inverse function of Image by Q. Provided g is G-measurable, Theorem 227 then gives
Image
If the stochastic integrand g is g1 = X(Image) = dXs then, for every r, Image is simply the standard normal distribution function N.
Other stochastic integrands g may be dealt with on a case by case basis, as in Chapter 8. But without striving3 to find analytical form for distribution functions Image, numerical calculations can provide a sense of them.
In the calculations below it is more convenient to use decimal rather than rth binary partitions. The Maple code in Calculation 5 is intended to give 1000 Riemann sum estimates, each containing 25 terms, of the weak stochastic integral of Image on ]0,1].
Calculation 5 selects 25,000 random Brownian increments, each having standard deviation 0.2 and variance 0.04. With t = 1 this is equivalent to partitioning the domain T = ]0, t] into 25 equal steps of length 0.04. In each case Maple calculates the Riemann sum of integrand
Image
so the sample of 25,000 increments generates a sample of 1000 Riemann sums, each containing 25 terms. (A Riemann sum containing 25 terms is not binary—the closest binary partition has r = 5, giving a 2⁵ = 32 term Riemann sum. Either way, the Riemann sum calculation gives an idea of how the resulting values are distributed.) Maple then constructs the histogram, Figure 9.6, of the 1000 Riemann sum values.
Calculation 5
restart:
with(Statistics):
BrownianIncrements := Sample(Normal(0, 0.2), 25000):
dx := [seq(BrownianIncrements[i], i=1..25000)]:
for r from 1 to 1000 do
   k := 25*r:
   j := k − 24:
   RiemannSum[r] := add(dx[i]^2, i=j..k):
end do:
RiemannSums := [seq(RiemannSum[r], r=1..1000)]:
Histogram(RiemannSums);
The histogram in Figure 9.6 gives a sense of the elementary form Z(r) of
Image
Figure 9.6: Histogram of 1000 25-term Riemann sums Image.
the contingent observable Image; the elementary and contingent forms are, respectively,
Image
with r = 5. (Taking r = 5 requires partitioning Image into 32 parts, while, for convenience, the partitioning in Calculation 5 and Figure 9.6 involves only 25 terms.)
The observable Z(r) does not appear to be normally distributed. If the Maple command
Mean(RiemannSums);
is now executed, then, depending on the particular Riemann sum values produced by the random sample or simulation of Calculation 5, a result such as 0.9930280578 is obtained. Since this is close to 1, it may constitute some kind of confirmation that (9.1) converges in some sense to 1 for this particular stochastic integrand.
The values of dx[i], given by the random sample in Calculation 5, are normally distributed Brownian increments. Subject to Maple programming constraints, it is hoped that they are also independent. Therefore the elementary-form distribution function Image indicated in Figure 9.6 corresponds to the distribution function Image given by (9.2). In other words, Z(r) “inherits”, in some sense, its distribution function FZr in domain R, from the standard Brownian distribution function G of the standard Brownian joint-basic process Image in Image.
The following calculations use successively finer partitions of Image = ]0,1] in order to see the shape of the distribution functions Image as r increases. For convenience, binary partitions are avoided.
Calculation 6
restart:
with(Statistics):
BrownianIncrements := Sample(Normal(0, 0.1), 100000):
dx := [seq(BrownianIncrements[i], i=1..100000)]:
for r from 1 to 1000 do
   k := 100*r:
   j := k − 99:
   RiemannSum[r] := add(dx[i]^2, i=j..k):
end do:
RiemannSums := [seq(RiemannSum[r], r=1..1000)]:
Histogram(RiemannSums);
Calculation 6 produces the histogram in Figure 9.7. It indicates a clustering of Riemann sum values around a “target” value 1. The range of Riemann sum values obtained in this case is less than the range demonstrated in Figure 9.6 for Calculation 5, indicating “tighter” clustering as r increases.
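The clustering can be quantified as well as inspected visually. The following command (an addition, not part of Calculation 6) returns the sample standard deviation of the 1000 Riemann sum values; if the increments behave as intended, it should be roughly halved each time the number of terms in the Riemann sums is quadrupled:
StandardDeviation(RiemannSums);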
Calculation 7 has 1000 Riemann sums Image, each containing 400 squares of Brownian increments dxs, each of which in turn has mean zero with standard deviation 0.05 and variance 0.0025; corresponding to random walks of 400 steps in the domain ]0,1].
Image
Figure 9.7: Histogram of 1000 100-term Riemann sums Image.
Image
Figure 9.8: Histogram of 1000 400-term Riemann sums Image.
Calculation 7
restart:
with(Statistics):
BrownianIncrements := Sample(Normal(0, 0.05), 400000):
dx := [seq(BrownianIncrements[i], i=1..400000)]:
for r from 1 to 1000 do
   k := 400*r:
   j := k − 399:
   RiemannSum[r] := add(dx[i]^2, i=j..k):
end do:
RiemannSums := [seq(RiemannSum[r], r=1..1000)]:
Histogram(RiemannSums);
The output from Calculation 7 is the histogram in Figure 9.8.
The three histograms in Figures 9.6, 9.7, and 9.8 can be seen to provide some empirical support for (9.1) converging to zero as r → ∞. They provide some confirmation that the integrand g = g2 = (dX)² has (weak) stochastic integral f(Image) = 1.
To test further for weak convergence of the Riemann sums to 1, change the preceding Maple calculations to calculate
Image
instead of RiemannSum.
Calculation 8
restart:
with(Statistics):
BrownianIncrements := Sample(Normal(0, 0.2), 25000):
dx := [seq(BrownianIncrements[i], i=1..25000)]:
for r from 1 to 1000 do
   k := 25*r:
   j := k − 24:
   VarSum[r] := abs(add(dx[i]^2, i=j..k) − 1):
end do:
VarSums := [seq(VarSum[r], r=1..1000)]:
Histogram(VarSums);
Mean(VarSums);
Calculation 9
restart:
with(Statistics):
BrownianIncrements := Sample(Normal(0, 0.1), 100000):
dx := [seq(BrownianIncrements[i], i=1..100000)]:
for r from 1 to 1000 do
   k := 100*r:
   j := k − 99:
   VarSum[r] := abs(add(dx[i]^2, i=j..k) − 1):
end do:
VarSums := [seq(VarSum[r], r=1..1000)]:
Histogram(VarSums);
Mean(VarSums);
Image
Figure 9.9: Histogram for 25-term Riemann sum observable with Image = 0.04.
Calculation 10
restart:
with(Statistics):
BrownianIncrements := Sample(Normal(0, 0.05), 400000):
dx := [seq(BrownianIncrements[i], i=1..400000)]:
for r from 1 to 1000 do
   k := 400*r:
   j := k − 399:
   VarSum[r] := abs(add(dx[i]^2, i=j..k) − 1):
end do:
VarSums := [seq(VarSum[r], r=1..1000)]:
Histogram(VarSums);
Mean(VarSums);
Each of these three (r = 1,2,3) Maple calculations returns two pieces of output. These are, in each case, a histogram of the rth sample, and the mean of the rth sample. For r = 1,2,3, the rth sample consists of 1000 data values
Image
Figure 9.10: Histogram for 100-term Riemann sum observable with Image = 0.01.
Image
Figure 9.11: Histogram for 400-term Riemann sum observable, Image = 0.0025.
Image
say. (For convenience of calculation, the partitions of Image for the Riemann sum are not rth binary.) The rth sample can be thought of as consisting of the 1000 data values returned by 1000 observables:
Image
being the corresponding elementary and contingent forms.
The terms dx[i] in the Riemann sums are selected by Maple as independent random Brownian increments; so the sample of contingent data elements hr(Image) is distributed in accordance with the Brownian distribution function G and the deterministic function hr. In other words, the elementary data-values y(r) = hr(Image) inherit, from G and hr, a distribution function Image in R. Therefore, using (9.2), for any J ∈ I(R) the proportion of sample data-values y in J is approximately
Image
and, using Theorems 39 and 79,
Image
For r = 1, 2, 3 the histograms are displayed in Figures 9.9, 9.10, and 9.11, respectively.
Each time Calculations 8, 9, and 10 are executed, a particular random sample of Brownian increments dx[i] is generated. For a trio of such samples, the three estimates of Image returned by
Mean(VarSums);
were
0.2180014861,   0.1101283834,   0.05825507338,
for partitions of Image using cells of length
0.04,   0.01,   0.0025,
respectively. Though only three terms of the series are examined, the results provide some intuitive support for the statement that Image → 0 as r → ∞.

9.5   Calculation of Itô’s Formula

For σ ∈ R, T = ]0, t], and Image, Itô’s formula states that
Image
the final integral being weak stochastic. This can be written as
Image
The “=” sign in Itô’s formula amounts to a statement about the value of the weak stochastic integral on the right-hand side and is therefore a statement about weak convergence of Riemann sums.
In Section 8.12 the formula was verified for the function
Image
For this particular function f, the objective here is to demonstrate numerically and visually the meaning of the weak convergence of Riemann sum estimates of the observables in Itô’s formula. To simplify the Maple calculations take μ = 0.
As before, the calculations are done by constructing large samples of the potential values of the Riemann sum observables and drawing histograms of these samples. The process is repeated, using finer partitions in each case, and the resulting histograms illustrate successively closer clustering of the sample Riemann sum values around a weak limit.
The intention is to provide some numerical and visual sense of what is involved in Itô’s formula, as expressed in (8.36). That is,
Image
where, with integrands from (9.3),
Image
The following Maple code is used to estimate Image.
Calculation 11
restart:
with(Statistics):
BrownianIncrements := Sample(Normal(0, 10), 10000):
dx := [seq(BrownianIncrements[i], i=1..10000)]:
for q from 1 to 1000 do
   k := 10*q:
   j := k − 9:
   s[j − 1] := 0:
   xs[j − 1] := 0:
   RS[j − 1] := 0:
   for i from j to k do
        s[i] := 100*(i − j + 1):
        xs[i] := add(dx[m], m=j..i):
        h1[i] := 10*exp(10*xs[i] − (1/2*100)*s[i])
                    − 10*exp(10*xs[i − 1] − (1/2*100)*s[i − 1]):
        h2[i] := 10*exp(10*xs[i − 1]
                    − (1/2*100)*s[i − 1]) * (xs[i] − xs[i − 1]):
        RS[i] := RS[i − 1] + abs(h1[i] − h2[i]):
   end do:
   ITO[q] := RS[k]:
end do:
Ito := [seq(ITO[q], q=1..1000)]:
Histogram(Ito);
Mean(Ito);
This simulation uses standard Brownian increments dx = xs′ − xs where
Image
Sample(Normal(0, 10), 10000) selects a random sample of 10,000 “independent” normal increments. The domain T = ]0,1000] is partitioned into 10 equal intervals of length σ² = 100, with partition points si = s[i]. The process value x(si) = xs[i] is obtained by adding up the normal increments dx of the current sample path. The values of h1 and h2 at partition point si are given by h1[i] and h2[i] in accordance with (9.4). Calculation 11 has r = 10 (so the partition is not binary). For each q, 1 ≤ q ≤ 1000, the qth sample Riemann sum value RS[k], with k = 10q, of Image is given by
Image
which the Maple code finds by accumulating, in successive cycles, the absolute values of the terms (h1[i] − h2[i]) in the r-partition:
RS[i] := RS[i − 1] + abs(h1[i] − h2[i]).
A sample of 1000 of these Riemann sum estimates of Image is listed in the Maple list denoted by Ito.
The histogram (Figure 9.12) of the list Ito gives an impression of the Riemann sum observable for this instance of Itô’s formula.
As explained in Section 9.4, the Mean(Ito) command provides a Riemann sum estimate of Image for a particular integer r. According to the theory,
Image
as r → ∞. With r = 10 in T = ]0,1000], a sample of 1000 Riemann sum estimates of Image produced for Calculation 11 returned an estimated value Mean(Ito) = 76.86002866.
To see what happens when T = ]0,1000] is partitioned with smaller subintervals, Calculation 12 selects 1,000,000 Brownian increments with σ = σ² = 1. A sample of 1000 Riemann sums is formed from partitions of T = ]0,1000] consisting of 1000 equal intervals of length 1.
Calculation 12
restart:
with(Statistics):
BrownianIncrements := Sample(Normal(0, 1), 1000000):
dx := [seq(BrownianIncrements[i], i=1..1000000)]:
for q from 1 to 1000 do
   k := 1000*q:
   j := k − 999:
   s[j − 1] := 0:
   xs[j − 1] := 0:
   RS[j − 1] := 0:
   for i from j to k do
         s[i] := 1*(i − j + 1):
         xs[i] := add(dx[m], m=j..i):
         h1[i] := 1*exp(1*xs[i] − (1/2*1)*s[i])
                     − 1*exp(1*xs[i − 1] − (1/2*1)*s[i − 1]):
         h2[i] := 1*exp(1*xs[i − 1]
                     − (1/2*1)*s[i − 1])*(xs[i] − xs[i − 1]):
         RS[i] := RS[i − 1] + abs(h1[i] − h2[i]):
   end do:
   ITO[q] := RS[k]:
end do:
Ito := [seq(ITO[q], q=1..1000)]:
Histogram(Ito);
Mean(Ito);
The histogram representation of the elementary form of the Riemann sum observable is in Figure 9.13. Mean(Ito) returned 1.152977886 as estimate of Image.
The following two calculations are done on domain T = ]0,1]. Calculation 13 has σ = 0.2, so T = ]0,1] is partitioned into 1/0.04 = 25 subintervals; Maple simulates 1000 Riemann sum estimates for (9.4), with histogram output in Figure 9.14, and Mean(Ito) estimate 0.2185890853 for VhrG[RT].
Calculation 13
restart:
with(Statistics):
BrownianIncrements := Sample(Normal(0, 0.2), 25000):
dx := [seq(BrownianIncrements[i], i=1..25000)]:
for q from 1 to 1000 do
   k := 25*q:
   j := k − 24:
   s[j − 1] := 0:
   xs[j − 1] := 0:
   RS[j − 1] := 0:
   for i from j to k do
         s[i] := 0.04*(i − j + 1):
         xs[i] := add(dx[m], m=j..i):
         h1[i] := 0.2*exp(0.2*xs[i] − (1/2*0.04)*s[i])
                      − 0.2*exp(0.2*xs[i − 1] − (1/2*0.04)*s[i − 1]):
         h2[i] := 0.2*exp(0.2*xs[i − 1]
                      − (1/2*0.04)*s[i − 1])*(xs[i] − xs[i − 1]):
         RS[i] := RS[i − 1] + abs(h1[i] − h2[i]):
   end do:
   ITO[q] := RS[k]:
end do:
Ito := [seq(ITO[q], q=1..1000)]:
Histogram(Ito);
Mean(Ito);
Image
Figure 9.12: Histogram for Itô formula, 10-term approximation, Image = ]0,1000].
Image
Figure 9.13: Itô formula, 1000-term approximation, Image = ]0,1000].
Calculation 14 has σ = 0.05, so T = ]0,1] is partitioned into 1/0.0025 = 400 subintervals, and Maple simulates 1000 Riemann sum estimates for (9.4), with histogram output in Figure 9.15, and Mean(Ito) estimate 0.1893459552 for VhrG[RT].
Calculation 14
restart:
with(Statistics):
BrownianIncrements := Sample(Normal(0, 0.05), 400000):
dx := [seq(BrownianIncrements[i], i=1..400000)]:
for q from 1 to 1000 do
   k := 400*q:
   j := k − 399:
   s[j − 1] := 0:
   xs[j − 1] := 0:
   RS[j − 1] := 0:
   for i from j to k do
         s[i] := 0.0025*(i − j + 1):
         xs[i] := add(dx[m], m=j..i):
         h1[i] := 0.05*exp(0.05*xs[i] − (1/2*0.0025)*s[i])
                     − 0.05*exp(0.05*xs[i − 1] − (1/2*0.0025)*s[i − 1]):
         h2[i] := 0.05*exp(0.05*xs[i − 1]
                     − (1/2*0.0025)*s[i − 1])*(xs[i] − xs[i − 1]):
         RS[i] := RS[i − 1] + abs(h1[i] − h2[i]):
   end do:
   ITO[q] := RS[k]:
end do:
Ito := [seq(ITO[q], q=1..1000)]:
Histogram(Ito);
Mean(Ito);
Image
Figure 9.14: Histogram for Itô formula, 25-term approximation, Image = ]0,1].
Image
Figure 9.15: Histogram for Itô formula, 400-term approximation, Image = ]0,1].

9.6   Calculating with Binary Partitions of RT

The calculations in preceding sections have been done, essentially, for elementary-form observables Y ≃ y[R, FY], a form which lends itself to visualization by histograms. In contrast, a joint-contingent observable with sample space RT has representation
f(XT) ≃ f(xT) [RT, FXT],
and its expected value is
E[f(XT)] = Image f(xT)FXT(I[N]).
This calculation is done by forming partitions and Riemann sums in RT.
Binary (r, q)-partitions of infinite-dimensional domains RT are described in Chapter 3, Section 3.5. With T = ]0, t], 2^r is the number of partition points Image = j2^(−r) of T, and Image + 1 is the number of partition points k2^(−q) of Image for each j. As detailed in (3.12), the number of cells in an (r, q)-partition Imagerq of RT is Image, which increases greatly as r and q increase. Since this is also the number of terms in the corresponding Riemann sum, the scale and amount of calculation involved quickly escalate as r and q increase.
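For instance (an illustrative count, reconstructed from the lists built in Calculation 15 below rather than quoted from (3.12)): with r = q = 2 there are 2² = 4 partition points of T, and at each of them the real line is divided into q·2^(q+1) + 2 = 18 one-dimensional cells (sixteen bounded cells together with two unbounded ones). The resulting partition of RT then contains 18⁴ = 104,976 cells K, even at this very coarse level of subdivision.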
Take T = ]0,1] and FXT = G, so the process is standard Brownian motion on ]0,1]. The binary partition points of T = ]0,1] are
Image
In Section 7.9 a binary cell K partitioning RT is denoted by
Image
and the value of G on K is denoted by
Image
where, for c = −Image,
Image
the lower and upper limits of integration (ql[j] and qu[j], respectively) being drawn from the cells Kq|k which, for each j, partition Image:
Image
With r = 2 the partition points of T = ]0,1] are Image, j = 1, 2, 3, 4; that is,
Image
With q = 2, then, in accordance with (9.6), and for each j, the one-dimensional domain Image has partition points
Image
For 1 ≤ j ≤ 4, the lower and upper integration limits in (9.5), ql[j] and qu[j], respectively, are drawn from (9.7). Therefore, for each j, the corresponding integral in (9.5) can be any one of
Image
By separating (9.7) into two separate lists,
Image
the following Maple code establishes limits ql[j] and qu[j] in domain R]0, 1] for the iterated integrals in (9.5).
Calculation 15
restart:
r := 2:
q := 2:
for j from 1 to 2^r do S[j] := j*2^(−r): end do;
s := [seq(S[j], j = 1 .. 2^r)];
kmax := q*2^(q + 1):
for k from 1 to kmax do qList[k] := −q + (k − 1)*2^(−q): end do:
ql := [−∞, seq(qList[k], k = 1 .. kmax), q];
qu := [seq(qList[k], k = 1 .. kmax), q, ∞];
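For r = q = 2, these commands should return lists of the following shape (a reconstruction from the assignments above, with entries in steps of 2^(−q) = 0.25):
ql = [−∞, −2, −1.75, −1.5, ..., 1.75, 2],
qu = [−2, −1.75, −1.5, ..., 1.75, 2, ∞].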
This returns the pair of lists in (9.8). In (9.5), with c = −Image and Image − Image = 2^(−r), write
Image
Then
G(K) = βα.
This is calculated in the next section.

9.7   Calculation of Observable Process in RT

Section 7.10 shows how to use binary partitions Imagerq to estimate the expectation of a joint-contingent observable
f(XT) ≃ f(xT)[RTG].
In Theorem 207, a Riemann sum estimate for E[f(XT)] is
Image
where f(rq)(xT) is a step function approximation of f(xT). This section calculates a single term of this Riemann sum. Other terms are calculated similarly.
To simplify the Maple calculation, instead of a binary (rq)-partition suppose T = ]0,1] is partitioned as
Image
and, at each of the partition points of T, suppose R is partitioned as
] − 10,  −0.6],  ] − 0.6, −0.3],  ] − 0.3, 0],  ]0,0.3],  ]0.3,0.6],  ]0.6, 10]
in order to calculate the integral in (9.9). (With slight re-wording, Theorem 209 ensures convergence of the step functions for any regular partition, not just the binary ones.)
Lower and upper limits of −10 and 10 are used here in place of −∞ and ∞. This makes little difference to the calculated values of the Brownian distribution function G(I[N]), as the variance of the Brownian increments is
Image
The corresponding partition of RT consists of 6³ = 216 cells K; for example,
Image
Then, with y0 = 0 and
Image
Maple can calculate the distribution function value G(K) = βα, as follows.
Calculation 16
restart:
y[0] := 0:
g := mul(exp(−(3/2)*(y[j] − y[j − 1])^2), j=1..3):
α := evalf[10](int(g, y[1]=0..0.3, y[2]=−0.3..0, y[3]=0.3..0.6)):
β := evalf[10]((2*Pi)^(−3/2) * 3^(3/2)):
G := β * α;
0.004336025795
Thus G(K) = 0.004336025795. The Brownian likelihood of other cells can be similarly calculated. For any cell K of the regular partition, any one of the three lower limits of integration in Line 4 of Calculation 16 is any one of the six numbers
−10,  −0.6,  −0.3,  0,  0.3,  0.6;
and in each case, the corresponding upper limit of integration in Line 4 of Calculation 16 is
−0.6,  −0.3,  0,  0.3,  0.6,  10.
Once the lower limits of the integration are established, the corresponding upper limits are then determined.
To make a list of the lower limits, it is necessary to list all possible permutations, with repetition, of the numbers −10, −0.6, −0.3, 0, 0.3, 0.6. Maple has a permute command which lists all permutations of n items, taken r at a time without repetition. The following sequence of commands will generate the listings needed for permutations of six items taken three at a time, with repetition.
Calculation 17
restart;
l[1] := − 10:        l[2] := − 0.6:     l[3] := − 0.3:
l[4] := 0:           l[5] := 0.3:       l[6] := 0.6:
u[1] := − 0.6:       u[2] := − 0.3:     u[3] := 0:
u[4] := 0.3:         u[5] := 0.6:       u[6] := 10:
L := [seq(l[j], j = 1 .. 6)];
                        [− 10, − 0.6, − 0.3, 0, 0.3, 0.6]
U := [seq(u[j], j = 1 .. 6)];
                        [− 0.6, − 0.3, 0, 0.3, 0.6, 10]
p := 0:
for r1 from 1 to 6 do
   for r2 from 1 to 6 do
      for r3 from 1 to 6 do
         p := p + 1:
         P[p] := [L[r1], L[r2], L[r3]]:
         Q[p] := [U[r1], U[r2], U[r3]]:
      end do:
   end do:
end do:
This generates the list P of 216 triples of lower integration limits, and the list Q of the corresponding upper integration limits. For instance, the 100th cell K of the 216-member partition of R]0, 1] is determined by the lists
P[100] = [−0.3, −0.3,0.3],     Q[100] = [0,0,0.6],
so
K[100]  =  ] −0.3, 0] × ] −0.3, 0] × ]0.3, 0.6] × Image .
A Riemann sum estimate of the expected value of the observable f(XT) is then given by
Image
the sum being taken over the 216-member partition. Suppose
Image
With the continuous modification of G, f(xT) can be taken to be zero if xT is not continuous. Theorem 209 ensures that E [f(XT)] exists, and that it can be approximated to any degree of accuracy by Riemann sums over regular partitions of RT. For the cell K in Calculation 16, Theorem 214 implies that f(xT) can be taken to be
Image
since x0 = 0, and the values x1 and x2 can be taken to be 0.3 and 0. This reduces to exp(−0.03), or 0.9704455335. Therefore this particular term of the Riemann sum reduces to
exp(−0.03)G(K) = 0.9704455335 × 0.004336025795 = 0.004207876866.
If this calculation is replicated for the other 215 cells partitioning R]0, 1], then the resulting Riemann sum calculation gives an estimated value for E [f(XT)]. The same method can be used to obtain estimates of the diffusion function ψ(ξ, Image), given by the marginal density of expectation
Eξτ [f(XT)].
In this case Image = 1. Suppose ξ = 0.5. Then, in Calculation 16, take
y[3] = ξ = 0.5,
and perform the integration on variables y[1] and y[2] only. The result given by Maple in this case is
G(K)= 0.03999644118,
and the corresponding Riemann sum term is exp(−0.03)G(K) =
Image
The approximate numerical value of E0.5,1(f(XT)) = ψ(0.5, 1) can be built up from calculations like this one.

9.8   Other Joint-Contingent Observables

As further illustration of numerical calculation in sample space RT, suppose the Brownian observable is
f(XT) = sup{x(t) : t ∈ ]0, 1] = T}.
This function of xT is not continuous as x(t), t ∈ T, varies continuously in its separate components. Nor is this function bounded. Therefore Theorem 209 does not guarantee convergence of its Riemann sums over regular partitions, and further investigation into convergence is required.
For similar reasons it is not possible to apply Theorem 214, as it stands, in order—as in (9.10)—to produce a version of f which is more amenable to numerical calculation. But if conditions were such that devices like these could be applied, then, for the cell K of Calculation 16, this particular term of the Riemann sum estimate of E[f(XT)] would have a value of 0.3 or 0.6—depending on the method of calculation—for the potential data-value f(xT) for that particular cell K. Potential data-values f(xT) for other cells can be obtained similarly.
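For a single simulated path, a sample value of this observable is easily produced. A minimal sketch (assuming Calculation 2 has just been run, so that the values ordinate[j] are in memory):
# Sample value of sup{x(t)} along the simulated Brownian path of Calculation 2.
PathSup := max(seq(ordinate[j], j=0..10000));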
Much of this book is concerned with potentiality distribution functions FX which can take negative or complex values, such as the Feynman distribution function GImage, with Image = Image. To perform computations such as those in Calculation 16 and (9.11), the integrand can be separated into real and imaginary parts. For the cell K of Calculation 16, with T = ]0, 1], the real part Image(GImage(K)) is α1 + α2, where
Image
This is obtained by separating
Image
into real and imaginary parts, with n = 3 and tj − tj−1 = Image. These iterated integrals can be calculated as in Calculation 16, with the appropriate trigonometric function replacing the exponential function. As in (9.11), a term of the Riemann sum estimate of E [Image(XT)] can be calculated; and again the potential datum-value Image(xT) for each such term can be separated into real and imaginary parts.
For any given ξ ∈ R and Image ∈ R+, Riemann sum estimates of the quantum mechanical state function ψImage(ξ, Image) can similarly be obtained, by removing the integration on ξ = yn as in Calculation 16. Likewise, Riemann sum estimates of the values of the terms ψImagep(ξ, Image) of the series expansion of ψImage(ξ, Image), on which Feynman diagrams are based, can be obtained—see Section 7.22. Theorem 222 then provides convergence conditions for these Riemann sums over regular partitions such as those in Calculation 16.

9.9   Empirical Data

At the start of this book Table 1.1 gives simple numerical data from which an estimate of the distribution function for the observable is deduced. No claim is made regarding the accuracy or reliability of the estimate.
Is it possible to estimate a joint distribution function from sample data xT of a joint observable or random process XT? And why should the attempt be made? This section seeks to provide support for empirical approaches.
In contrast, the basic assumption of Section 8.16 is (8.48). This states that asset price processes ZT satisfy ZT ≃ zT Image so the increments ln Zt − ln Zt′ are assumed to be
  • independent, and
  • normally distributed.
In Ross [198], the end-of-day price of a barrel of crude oil, as traded on the New York Mercantile Exchange, is listed for each trading day from 3 January 1995 to 19 November 1997. A chi-square statistical test of the independence hypothesis is applied to increments ln Zt − ln Zt′, and the independence hypothesis fails.
In general, independence is intuitively implausible. Realistically, it is hard to conceive of any pair of real-world events for which there is no possible chain of causal events, however remote, linking the two.
Does this mean that the financial theory presented in Section 8.16 is worthless? Independence is a useful mathematical abstraction. The fact that, in geometry, the mathematically perfect circular shape rarely if ever actually occurs in physical reality does not detract from the crucial role of the mathematical circle in understanding the world. But it is unwise, in the “real” world, to gamble on occurrence of a mathematically perfect abstraction.
The assumption that the increments of the logarithms of asset prices are normally distributed has also been challenged. It is argued that large changes in asset prices are much more common than the theory of Section 8.16 predicts.
This section presents series of asset prices from4 Thomson Reuters Datastream [224] in order to assess these issues further. Datastream provides data in Excel format. To import an Excel file FileName.xls into Maple, the following Maple commands should be executed:
with(ExcelTools):
Import("FileLocation/FileName.xls"):
Figure 9.16 is a Maple graph displaying a series of daily end-of-day share prices of Glanbia, a food processing company. Prices are in pounds sterling. The series consists of 5219 terms, running from 8 March 1991 to 9 March 2011, beginning and ending as follows:
0.8, 0.817, 0.836, ···, 3.782, 3.699, 3.692.
Image
Figure 9.16: Twenty-year graph of Glanbia daily share prices.
Take the logarithm of each of the daily share prices, and calculate the daily increments by subtraction. This gives a list of 5218 log-increments, with graph Figure 9.17 and histogram Figure 9.18. Maple returns a mean value μ = 0.0002930839150 for the 5218 increments of the logarithms of the share prices, with standard deviation σ = 0.0234953419046857.
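The Maple commands for this step are not given in the text; the following is a sketch of one way to carry it out, assuming the imported prices have been placed in a list named Prices (the list name is an assumption):
with(Statistics):
# Daily log-increments of the share price series.
LogPrices := [seq(ln(Prices[i]), i=1..5219)]:
LogIncrements := [seq(LogPrices[i+1] - LogPrices[i], i=1..5218)]:
plot([seq([i, LogIncrements[i]], i=1..5218)]);  # graph as in Figure 9.17
Histogram(LogIncrements);                       # histogram as in Figure 9.18
Mean(LogIncrements);
StandardDeviation(LogIncrements);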
If the Glanbia share prices satisfy the assumptions of Section 8.16, Figure 9.18 should approximate to a normal distribution with mean zero and standard deviation 0.0235.
Does Figure 9.18 look like a histogram of a normally distributed random variable? That is what is assumed in Section 8.16.
A chi-square statistical test can be applied to the data to check whether the Glanbia data satisfy the assumptions of Section 8.16. The null hypothesis is that the sample of 5218 data items of Figure 9.18 are observations of a normally distributed random variable with mean μ and standard deviation σ. Then the expected frequencies e are obtained, in accordance with the null hypothesis, by calculating
Image
Image
Figure 9.17: Graph of log-increments of Glanbia share prices.
Image
Figure 9.18: Histogram of log-increments of Glanbia share prices.
A simple Maple calculation gives the actual, or observed, frequencies o of the Glanbia data. The observed and expected frequencies are given in Table 9.1.
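The chi-square statistic itself can then be formed in the usual way. A sketch (the lists o and e of observed and expected band frequencies are assumptions; the actual band boundaries are those of Table 9.1):
# o and e are lists of observed and expected frequencies for the bands.
ChiSquare := add((o[i] - e[i])^2/e[i], i=1..nops(o));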
The tabulation confirms the impression given by Figure 9.16, that extreme changes in share prices are more common than is predicted under the lognormality assumption.
The chi-square value is
Image
With seven degrees of freedom, the critical values of χ2 for significance levels 0.05, 0.01, and 0.001 are 14.07, 18.48, and 24.32, respectively. Therefore the null hypothesis should be rejected. The evidence of the Glanbia data contradicts the hypotheses of Section 8.16. The Glanbia data demonstrate the “black swan” phenomenon—events which are rare in theory are common in practice.
Figures 9.19 to 9.24 refer to daily series of share price data, each series of twenty years’ duration, selected miscellaneously from Thomson Reuters Datastream [224].
Figures 9.21 and 9.24 show significant numbers of extreme outlying values. Superficial inspection of this miscellaneous selection of share price data indicates that the hypotheses of Section 8.16 cannot be assumed to be valid. In particular, it cannot be assumed, without verification, that a geometric Brownian distribution function Image is applicable to series of share prices. And even if some modified version of Image is used, it is still necessary to verify, test, or otherwise ensure that the distribution function fits the data.
Image
Table 9.1: Calculation of chi-square value for Glanbia data.
Image
Figure 9.19: Graph of Rio Tinto share prices.
Image
Figure 9.20: Graph of log-increments of Rio Tinto share prices.
Image
Figure 9.21: Histogram of log-increments of Rio Tinto share prices.
Image
Figure 9.22: Graph of GlaxoSmithKline share prices.
Image
Figure 9.23: Graph of log-increments of GlaxoSmithKline share prices.
Image
Figure 9.24: Histogram of log-increments of GlaxoSmithKline prices.

9.10   Empirical Distributions

Even if the finance theory of Section 8.16 cannot be relied on, it may still be possible to analyze the random variability properties of empirical data.
Section 8.16 is predicated on a particular analytic distribution function Image. Various modifications of this function are sometimes invoked to deal with the mismatch between the theory and the empirical data such as those in Section 9.9. But it is difficult to imagine a reason why the likelihood distribution—if it exists—of any series of empirical data must have some analytic representation. There seems to be no a priori justification for assuming that there is an “algebraic formula” for such likelihoods, nor that any algebraic formula will give accurate values of the joint likelihoods.
Does this mean that it is impossible to carry out accurate analysis of the random variability of such data?
If there is no underlying likelihood Image operating on the joint data series, then the series is not an observable, and the methods of this book have nothing to offer in the form of guidance. The entrails of a goat would be just as good.
But if there is an underlying joint likelihood Image, then it can be estimated numerically, and the likelihood estimates can then be used, as in Sections 9.7 and 9.8, to estimate expected values, marginal expectation densities, and the like—just as we sought to use the distribution function Image. The accuracy of the results will then depend on the accuracy of the estimates of Image.
In Brownian motion there is a theoretical foundation—such as Theorem 217—which gives some guarantee of the ultimate accuracy of the results of estimates based on regular and binary partitions of RT. Such estimates are good when the processes involved are Brownian or geometric Brownian, and when the contingent observable integrand function f satisfies the conditions of Theorem 217. But what guarantee of accuracy is there when these assumptions are dropped?
Brownian and geometric Brownian processes (Xt), t ∈ T, are defined for continuous t, and the data-values xt are continuous and unbounded. Therefore results such as Theorem 217 are needed as a basis for discrete-time, discrete-value estimates from regular or binary partitions of RT.
But the empirical price processes (Xt) of Section 9.9 do not have continuous t. As recorded in Thomson Reuters Datastream [224], t is discrete, and each datum value xt is constant from one end-of-day observation to the next. And even if the constantly changing moment-to-moment prices are taken into account, these too are momentarily constant from one “time-tick” to the next—unlike the values taken by a Brownian or geometric Brownian process.
Also, the prices xt are discrete, not continuous, since these are units of money and are not infinitely divisible. The value of xt is bounded below by zero, and while theoretically there is no upper bound, in practice some very large upper cutoff value can be applied. So in any finite interval of time there is only a finite number of time-ticks t, and there is, in effect, only a finite number of possible values xt.
In other words, while the insights and perspectives of advanced mathematics can be helpful, investigation of the random variability of share prices is inherently simpler than Brownian motion and quantum mechanics. In fact, results like Theorem 217 are irrelevant if, when analyzing price processes, the false assumption of continuous time/continuous values is abandoned, and the more realistic discrete view is adopted.
Thus, if numerical estimates of joint likelihood Image are used, the accuracy of any final calculation depends on the accuracy of the estimates. In other words, accuracy of results depends on the amount of effort, insight, and skill used in estimating joint likelihoods.

9.11   Calculation of Empirical Distribution

Though detailed investigation of such estimates of joint likelihood is beyond the scope of this book, some rudimentary calculation is presented here for the Glanbia series from Thomson Reuters Datastream [224].
The series starts on Friday 8 March 1991, with the Glanbia share price standing at 0.8 at the end of that day, and concludes on Wednesday 9 March 2011, with share price 3.692. Let (τ′, τ) denote the start and finish dates, respectively, of this series, and write T = ]τ′, τ].
Assuming the existence of joint likelihood potentiality Image for the Glanbia prices, the series xT from Thomson Reuters Datastream [224] is a joint datum of basic observable
Image
Now suppose Wednesday 9 March 2011 is the present, and suppose a contract in Glanbia shares matures at a point four months in the future, at the end of Wednesday 29 June 2011; in other words, after sixteen five-day trading weeks.
Assuming the continued existence of the joint likelihood potentiality Image for the Glanbia prices, let t represent the “future” date Wednesday 29 June 2011 (end-of-day), so Image = ]Image, t] represents the eighty-day trading period 9 March 2011 to 29 June 2011. Then the Glanbia joint observable for the period is
Image
where, for I[N] ∈ I(RImage), the distribution function values are
Image
To perform some analysis of the joint random variability of the share prices in the forthcoming sixteen weeks, Thursday 10 March 2011 to Wednesday 29 June 2011, estimates of the joint distribution function FX = FXImage are needed.
Suppose Image is partitioned by times s1, s2, and s3, each time increment being twenty days or four five-day trading weeks. Then s0 = Image is the present (end of 9 March 2011), s4 = t is 29 June, s1 represents the end of Wednesday 6 April 2011; and so on.
Suppose the daily end-of-day prices are partitioned by intervals of length one-tenth of a pound. Then the domain RImage = R]Image, t] is partitioned by cells
Image
where each Ij has the form ]d, d + 0.1]. If marginal densities at time t = s4 are required, then a typical partitioning cell for domain RImage = R]Image, t[ is
K = I1 × I2 × I3 × RImage\{s1, s2, s3},
where x(t) is now some given, fixed positive number. “Present time” is s = Image, and the present price is x(Image) = 3.692. Using intervals of one-tenth of a pound, the components of K could be
Image
The next task is to find an estimate of the joint distribution function value FX(K), using the historic joint data-values of the series xT from Thomson Reuters Datastream [224].
One way of making such an estimate is, by examining the series values from 8 March 1991 to 9 March 2011, to find the proportion of occasions when, at four-week intervals, the prices had increments or transitions corresponding to those in the partitioning cell K.
The series xT contains 5219 end-of-day values, and each comparison requires 80 further trading days, so 5219 − 80 = 5139 such tests or comparisons can be made on the component elements of xT. Thus, for each of the 5139 starting days, count the occasions on which the price twenty days later has increased by no more than 0.1 above the comparison price; the price forty days later has decreased by no more than 0.1 below the comparison price; and so on.
The Maple code in Calculation 18 below evaluates this proportion.
Fgb is the estimated value of FX(K). In this case, fgb = 25, so Maple finds 25 occurrences of joint increments or decrements of the kind appearing in K, and returns
FX(K) = 0.004864759681
as the value of Fgb. The values FX(K) for the other cells K of the partition can be calculated similarly. A Maple listing of the cells in the partition can be generated by permutation-with-repetition, as in Calculation 17; a sketch of the idea follows. Once the distribution function estimates FX(K) are found for the cells of the partition, the expected value of a contingent observable f(XImage) can be estimated, along with the marginal density of the expectation.
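Calculation 17 is not reproduced here, so the following is only a minimal sketch of the permutation-with-repetition idea, under illustrative assumptions: the list incs of left endpoints d (one list serving all three of the times s1, s2, s3) is hypothetical, not taken from the text, and each triple [a, b, c] stands for the cell ]a, a + 0.1] × ]b, b + 0.1] × ]c, c + 0.1].
incs := [seq(-0.5 + 0.1*i, i = 0 .. 9)]:   # hypothetical left endpoints d of cells ]d, d + 0.1]
cells := [seq(seq(seq([a, b, c], c in incs), b in incs), a in incs)]:
nops(cells);                               # 10^3 = 1000 cells: permutation with repetition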
To obtain a risk-neutral model it is fairly simple to adjust the distribution function estimates so as to simulate, in expectation estimates, a specified growth rate such as the riskless rate of interest ρ.
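A hedged sketch of one such adjustment (an assumption; the text does not specify the method): rescale the estimates $F_X(K)$ to values $\tilde F_X(K)$ with $\sum_K \tilde F_X(K) = 1$, chosen so that the estimated expected terminal price grows at the riskless rate,
$$\sum_{K} \tilde F_X(K)\,x_t(K) \;=\; x(\tau)\,e^{\rho(t-\tau)},$$
where $x_t(K)$ is a representative terminal price for the cell $K$, $\tau$ is the present date, and $t$ the maturity date. Expectation estimates formed with $\tilde F_X$ then simulate growth at rate $\rho$.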
Calculation 18 is not necessarily suitable5 for every series, and there are many ways in which estimates can be improved. But whatever method is used, care must be taken to ensure that the distribution function values satisfy consistency. The counting procedure in Calculation 18 ensures consistency in this case.
To see how consistency is ensured in this example, suppose dimension s1 is partitioned, with
Image
where the cells Image are disjoint. The relative frequency counting argument ensures that FX(I2 × I3 × I4 × R]Image, t]\{s2, s3, s4}) is given by
Image
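A plausible reading of this display (an assumption, patterned on the cells K above, with $\tau$ written for the start of the period):
$$F_X\bigl(I_2\times I_3\times I_4\times R^{]\tau,t]\setminus\{s_2,s_3,s_4\}}\bigr)\;=\;\sum_{j} F_X\bigl(I_1^{(j)}\times I_2\times I_3\times I_4\times R^{]\tau,t]\setminus\{s_1,s_2,s_3,s_4\}}\bigr),$$
which holds for relative frequency estimates because each counted occurrence has its s1-value in exactly one of the disjoint cells $I_1^{(j)}$.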
Conditioning is built into Calculation 18, so dubious assumptions, such as independence of log-increments, are avoided.
What about other potentially bogus assumptions? It is possible the assumptions (what are they?) in Calculation 18 are invalid, in which case some other estimation method must be tried. But assumptions that might perhaps be valid in some circumstances are preferable to assumptions that are demonstrably false.
Calculation 18
# gb is the list of 5219 end-of-day Glanbia prices; fgb counts occurrences
fgb := 0:
for j from 1 to 5139 do
   # the increments after 20, 40, 60 and 80 trading days must fall in cell K
   if gb[j + 20] > gb[j] and gb[j + 20] <= gb[j] + 0.1
      and gb[j + 40] > gb[j] - 0.1 and gb[j + 40] <= gb[j]
      and gb[j + 60] > gb[j] and gb[j + 60] <= gb[j] + 0.1
      and gb[j + 80] > gb[j] + 0.1 and gb[j + 80] <= gb[j] + 0.2
   then fgb := fgb + 1:
   end if:
end do:
Fgb := evalf[10](fgb/5139);   # relative frequency estimate of FX(K)
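The expectation step mentioned above can then be sketched as a finite Riemann sum. This is an assumption about how the estimates would be combined, not a reproduction of Calculation 17 or 18: FK[i] stands for the estimate of FX(Ki) produced by a loop like Calculation 18, rep[i] for a hypothetical representative price history (or terminal price) in the cell Ki, and payoff for the contingent function f.
# hypothetical names: FK, rep, payoff are illustrative, defined elsewhere
EstExp := add(payoff(rep[i]) * FK[i], i = 1 .. nops(cells));   # estimate of expected value of f(X)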
__________
1 Unfortunately, the corresponding joint-contingent observables f(X) ≃ f(x)[RTFX] do not have such a convenient and familiar visual representation in multiple dimensions. Figure 1.3 shows the limitations, even when there are only two random variables involved.
2 This is the displacement-time picture of a point Image of the Cartesian product RT.
3As previously mentioned, the extension of measurability described in Section A.1 of the Epilogue may provide a simpler and more efficient way of dealing with this.
5Calculation 18 is, essentially, the relative frequency counting procedure of Table 1.1 with which this book commences.

A.1   Measurability

Without invoking a theory of measure, Chapter 4 demonstrates that Riemann sums, constructed from point/cell/dimension-set association and regulated by gauges, deliver a theory of integration which has the characteristics, properties, and theorems required for the purposes of real analysis, and in particular for the purposes of analysis of random variability.
Sections 4.7 and 4.8 of Chapter 4 introduced variation as a form of outer measure of subsets S of RT. In Section 4.9 this concept was extended from sets S in RT to sets S × Image in RT × Image, and the extended concept of outer measure was used in Theorem 52. This in turn was needed when establishing the integrability of functions that arise in quantum mechanics—in Theorem 219, for instance.
The Henstock or Burkill-complete integral of a function h of elements (x, N, I[N]), structured by a multifunctional relation of association between these elements, is defined by forming Riemann sums of the function values, the terms of the Riemann sums being selected by “gauge” rules. Thus the concept of measurability is not required in order to define the integral. However, in proving certain properties of the integral, or in calculating the integrals of particular functions, the analytical device of step functions can be very useful.
Measurability is related to the construction of step functions and limits of step functions. These are formed from measurable sets, and they are measurable functions. Existence of the integrals of particular functions can often be established by means of step functions, using measurable sets and measurable functions. Accordingly, this section considers measurability of sets and functions in RT, in a further extension of the meaning given to the measurability concept in Section 4.9.
Suppose h(x, N, I[N]) is a real- or complex-valued function, and suppose A is a subset of Image defined as follows. Given
Image
an element (x, N, I[M]) of A satisfies x ∈ AImage; so
Image
Some of the elements (x, N, I[M]) of A may be associated, with M = N and x(N) a vertex of I(N) in RN, where I(t) is a proper subset of R for t ∈ N, and I(t) = R for t ∈ T\N. But it is not assumed that the elements (x, N, I[M]) of A are associated.
Measurability has been defined and used in preceding chapters. The following definition extends the meaning of measurability.
Definition 58  Suppose a function h of associated elements (x, N, I[N]) is given. Then a set A is h-measurable if the function
1A(x, N, I[N])h(x, N, I[N])
is integrable on RT.
Example 70  To relate this definition to preceding chapters, take h to be a distribution function F defined on I(RT), and let A be a subset of RT. Then A is F-measurable if and only if 1A(x)F(I) is integrable on RT. In terms of Definition 58, take
Image
so Image and Image. Then 1A(x)F(I) is integrable if and only if 1A(x, N, I[N])F(I) is integrable.          Image
Example 71  If none of the triples (x, N, I[N]) ∈ A = Image is associated, then, for any function h(x, N, I[N]), the set A is not h-measurable. This is because the function 1A(x)h(x, N, I[N]) is not integrable if A has no associated elements.          Image
If 1A(x, N, I[N])h(x, N, I[N]) is integrable on RT, write
Image
If h(x, N, I[N]) is integrable on RT, write
Image
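A natural reading of the two elided displays, consistent with how $P_h$ is used in Theorems 247 to 249 (an assumption, since the displays themselves are not reproduced):
$$P_h[A] \;=\; \int_{R^T} \mathbf{1}_A(x, N, I[N])\,h(x, N, I[N]), \qquad P_h[R^T] \;=\; \int_{R^T} h(x, N, I[N]).$$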
The following results show that the function Ph has some probability-like properties.
Theorem 247  If A is h-measurable and if h is integrable on RT, then RT\A is h-measurable and
Ph[RT\A] = Ph[RT] − Ph[A].
Proof. We have
Image
By assumption, both functions on the right are integrable on RT, so the result follows by Theorem 13.          Image
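The elided display is presumably the identity (an assumption, consistent with the reference to “both functions on the right”):
$$\mathbf{1}_{R^T\setminus A}\,h \;=\; h \;-\; \mathbf{1}_A\,h.$$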
Theorem 248  If A1, A2 are disjoint h-measurable subsets of RT, then A1 ⋃ A2 is h-measurable, and
Ph[A1 ⋃ A2] = Ph[A1] + Ph[A2].
Proof. Write A = A1 ⋃ A2. We have
Image
Image
giving the result.          Image
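Presumably the two displays rest on the identity (an assumption): for disjoint $A_1$, $A_2$,
$$\mathbf{1}_{A_1\cup A_2}\,h \;=\; \mathbf{1}_{A_1}\,h + \mathbf{1}_{A_2}\,h,$$
so $\mathbf{1}_A h$ is integrable as a sum of integrable functions, and the corresponding integrals add.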
This result can be extended to any finite union of disjoint sets. For the union of a countably infinite family of disjoint sets the following holds.
Theorem 249  Suppose h and |h| are integrable on RT. If A1, A2, A3, . . . are disjoint h-measurable subsets of Image, then Image is h-measurable, and
Image
Proof. Let Image and let Image. Then, for each (x, N, I[N]), as k → ∞,
Image
Given ε > 0, for each (x, N, I[N]) there exists kε so that k ≥ kε implies
|1A(x, N, I[N])h(x, N, I[N]) − 1Bk(x, N, I[N])h(x, N, I[N])| < ε|h(x, N, I[N])|,
with
|1Bk(x, N, I[N])h(x, N, I[N])| ≤ |h(x, N, I[N])|
for each k. By hypothesis, |h(x, N, I[N])| is integrable on RT, so Theorem 61 (dominated convergence theorem) gives the result.          Image
Example 72  For each j, suppose Image as in Example 70, the Aj being disjoint. Then Theorem 249 gives
Image
The countable additivity of the function Ph on disjoint measurable sets is, ultimately, a consequence of the properties of Riemann sums restricted by gauges.
Next, we define h-measurability of a function f.
Definition 59  Given a real-valued function f(x, N, I[N]) and a real- or complex-valued function h(x, N, I[N]), we say that f is h-measurable if, for each a ∈ R, each of the sets
Image
is h-measurable.
If f is complex-valued then its h-measurability is defined by taking the real and imaginary parts of f.
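The elided display presumably lists the four sets (an assumption; the exact order is not recoverable from the text, but the proof of Theorem 252 treats $S_2$ as the complement of $S_1$ and obtains $S_3$ as a monotone limit):
$$S_1=\{(x,N,I[N]) : f > a\},\quad S_2=\{f \le a\},\quad S_3=\{f \ge a\},\quad S_4=\{f < a\}.$$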
Theorem 250  Suppose h(x, N, I[N]) is real-valued, with h and |h| integrable on RT. Suppose f(x, N, I[N]) is real-valued, with fh, f|h|, and |fh| integrable on RT. Then f is h-measurable.
Proof. Suppose υ ∈ R. Write
k(x, N, I[N]) = |f(x, N, I[N])h(x, N, I[N])| + |υh(x, N, I[N])|.
The function k is integrable because, by hypothesis, each of the two functions of which it is composed is integrable. We have
Image
By Theorem 59, the functions
Image
are integrable for every υ ∈ R. Thus, for u < υ, the function p(x, N, I[N]) given by
max {min {f(x, N, I[N])h(x, N, I[N]),  υh(x, N, I[N])},  uh(x, N, I[N])}
is integrable. We have
Image
whenever, respectively,
Image
Let S denote
{(x, N, I[N]) : f(x, N, I[N]) ≥ υ}.
Write
q(x, N, I[N]) = p(x, N, I[N]) − uh(x, N, I[N]).
For u → υ–,
Image
monotonically; so for any ε > 0 there exists uε so that υ > u > uε implies
Image
Therefore, by Theorem 57, 1S(x, N, I[N]) is h-integrable for every υ ∈ R. Then
Image
is h-measurable for every υ. The other conditions required for h-measurability of f are proved similarly.          Image
Theorem 251  Suppose h and |h| are integrable on RT. Suppose A1, A2 are h-measurable subsets of Image. Then A1 ∩ A2 and A1 ⋃ A2 are h-measurable subsets of Image.
Proof. The functions 1Aj(x, N, I[N])h(x, N, I[N]) are integrable on RT for j = 1, 2. Therefore, with
f(x, N, I[N]) = 1A1(x, N, I[N]) + 1A2(x, N, I[N]),
the function f(x, N, I[N])h(x, N, I[N]) is integrable. Since this function satisfies the conditions of Theorem 250, the set
{(x, N, I[N]) : f(x, N, I[N]) ≥ 2}
is h-measurable. In other words A1 ∩ A2 is h-measurable. Then
1A1 ⋃ A2(x, N, I[N]) = f(x, N, I[N]) − 1A1 ∩ A2(x, N, I[N]),
and this is h-integrable, giving the second result.          Image
Theorem 252  Suppose h and |h| are integrable on RT. If, for each a ∈ R, any one of the sets in Definition 59 is h-measurable, then the other sets are also h-measurable; and the function f(x, N, I[N]) is h-measurable.
Proof. Denote the sets in Definition 59 by S1, S2, S3, S4, respectively. Assume S1 is h-measurable. Then
Image
Since h and 1S1h are integrable on RT, so is 1S2h. As this holds for each a ∈ R, S2 is h-measurable. Given a ∈ R, for j = 1, 2, 3, . . . , let
Image
and, for k = 1, 2, 3, . . . , let Image. As k → ∞,
1Bk(x, N, I[N]) → 1S3(x, N, I[N]).
Let
gk(x, N, I[N]) = 1Bk(x, N, I[N])h(x, N, I[N]).
For each k, gk is integrable on RT, with
|gk(x, N, I[N])| ≤ |h(x, N, I[N])|.
Given ε > 0, there exists kε so that k ≥ kε implies
Image
As |h| is integrable, Theorem 61 (dominated convergence theorem) implies that
Image
is integrable on RT, so S3 is h-measurable; and this holds for each a ∈ R. Finally,
Image
giving h-measurability of S4. Thus h-measurability of S1 implies h-measurability of Sj for j = 2, 3, 4. The other cases are proved similarly.          Image
Theorem 253  Suppose h(x, N, I[N]) is real- or complex-valued, with h and |h| integrable on RT, and suppose f(x, N, I[N]) is h-measurable. Given α ∈ R, the functions f(x, N, I[N]) + α and αf(x, N, I[N]) are h-measurable.
Proof. For each a ∈ R, f(x, N, I[N]) < a implies f(x, N, I[N]) + α < a + α and
αf(x, N, I[N]) ≤ αa   or   αf(x, N, I[N]) ≥ αa
depending on whether α ≥ 0 or α ≤ 0, respectively. The result then follows from Theorem 252.          Image
Theorem 254  Suppose h(x, N, I[N]) is real- or complex-valued, with h and |h| integrable on RT, and suppose f(x, N, I[N]) is h-measurable. Then the functions (f(x, N, I[N]))2 and |f(x, N, I[N])| are h-measurable. Suppose, in addition, that f ≠ 0. Then Image is h-measurable.
Proof. If a < 0 then Image is the set
{(x, N, I[N]) : |f(x, N, I[N])| > a} = RT,
which is h-measurable since h is integrable. Suppose a ≥ 0. Denote the sets
Image
by A1, A2, A3, A4, respectively. Each of these sets is h-measurable by hypothesis. Then
Image
each of which is h-measurable. Write
Image
With f ≠ 0, the set
Image
so Image is h-measurable.          Image
Theorem 255  Suppose h and |h| are integrable on RT, and suppose each of the functions f(x, N, I[N]) and g(x, N, I[N]) is h-measurable. Then the set
A = {(x, N, I[N]) : f(x, N, I[N]) < g(x, N, I[N])}
is h-measurable.
Proof. Let {a1, a2, a3, . . .} be an enumeration of the rational numbers in R, and let
Image
Then Image and, as k → ∞,
Image
with |1Bk(x, N, I[N])h(x, N, I[N])| ≤ |h(x, N, I[N])| for each k, where both 1Bk(x, N, I[N])h(x, N, I[N]) and |h| are integrable. Given ε > 0 we can find kε so that, for k ≥ kε,
|1A(x, N, I[N])h(x, N, I[N]) − 1Bk(x, N, I[N])h(x, N, I[N])| < ε|h(x, N, I[N])|.
Therefore by Theorem 61 (dominated convergence theorem),
1A(x, N, I[N])h(x, N, I[N])
is integrable on RT, giving the result.          Image
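The elided construction of $B_k$ is presumably (an assumption, consistent with the convergence $\mathbf{1}_{B_k}h \to \mathbf{1}_A h$):
$$B_k \;=\; \bigcup_{i=1}^{k}\,\{(x, N, I[N]) : f(x, N, I[N]) < a_i < g(x, N, I[N])\},$$
so that $B_k$ increases to $A$ as the enumeration $\{a_1, a_2, a_3, \dots\}$ exhausts the rationals.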
Theorem 256  Suppose h and |h| are integrable on RT, and suppose the functions f and g are h-measurable. Then the functions f − g, f + g, and fg are h-measurable. If g ≠ 0 then Image is h-measurable.
Proof. For any a ∈ R, the function a + g(x, N, I[N]) is h-measurable. Then, by Theorem 255, the set
Image
is h-measurable, so the function f – g is h-measurable. Since f + g = f – (–g), the function f + g is h-measurable. Since
Image
the preceding results imply that the function fg is h-measurable. For the final part use Theorem 254.          Image
Theorem 257  Suppose h and |h| are integrable on RT, and suppose the real-valued functions fj are h-measurable for j = 1, 2, 3, . . . . If, for each (x, N, I[N]),
fj(x, N, I[N]) → f(x, N, I[N])
as j → ∞, then f(x, N, I[N]) is h-measurable.
Proof. Given ε > 0 there exists jε so that, for j > jε,
|f(x, N, I[N]) − fj(x, N, I[N])| < ε.
For any a ∈ R write A = {(x, N, I[N]) : f(x, N, I[N]) > a}. Then, for each (x, N, I[N]),
|1A fj h − 1A f h| < ε|h(x, N, I[N])|;
the function 1A(x, N, I[N])fj(x, N, I[N])h(x, N, I[N]) is integrable for each j; and, for each (x, N, I[N]),
1A fj h → 1A f h
as j → ∞. Theorem 61 then implies that the function
1A(x, N, I[N]) f(x, N, I[N]) h(x, N, I[N])
is integrable. Since this holds for each a, f is h-measurable.          Image
The following is a partial converse to Theorem 252.
Theorem 258  Suppose k(x, N, I[N]) ≥ 0 is h-integrable on RT. If real-valued f(x, N, I[N]) is h-measurable with |f| ≤ k, then f is h-integrable on RT.
Proof. We have
f(x, N, I[N]) + k(x, N, I[N]) ≤ 2k(x, N, I[N]),
so, if we replace f by f + k, we can take f(x, N, I[N]) ≥ 0. For each positive integer m write
Image
and define
fm(x, N, I[N]) = aj   for   aj ≤ f(x, N, I[N]) < aj+1,   j = 0, 1, . . . , m2m,
with fm(x, N, I[N]) = 0 if f(x, N, I[N]) ≥ m2m. Then
fm(x, N, I[N]) ≤ fm+1(x, N, I[N]) ≤ f(x, N, I[N])
and, for each (x, N, I[N]), fm(x, N, I[N]) is monotone increasing as m increases. By measurability of f, fm is h-integrable, and, writing
Aj = {(x, N, I[N]) : aj ≤ f(x, N, I[N]) < aj+1},
we have
Image
Then fm → f monotonically as m → ∞, and, given ε > 0, for each (x, N, I[N]) there exists mε so that m > mε implies
|f(x, N, I[N])h(x, N, I[N]) − fm(x, N, I[N])h(x, N, I[N])| < εk(x, N, I[N]).
Provided f ≥ 0, Theorem 57 then implies that fh is integrable on RT. If f does not satisfy f ≥ 0, we can apply the preceding argument to the function f + k. Then the h-integrability of k and f + k implies that (f + k) − k is h-integrable.          Image
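A plausible rendering of the omitted definition of the $a_j$ and of the display above (an assumption; the levels $a_j = j\,2^{-m}$ are the standard ladder construction, not recoverable from the text):
$$f_m \;=\; \sum_{j=0}^{m2^m} a_j\,\mathbf{1}_{A_j}, \qquad \int_{R^T} f_m\,h \;=\; \sum_{j=0}^{m2^m} a_j\,P_h[A_j],$$
each $A_j$ being h-measurable because $f$ is h-measurable.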
Instead of assuming |f| ≤ k the preceding result remains valid if it is assumed that
  • f ≥ 0 and
  • Image is bounded as m → ∞.
Theorem 259  Suppose f(x, N, I[N]) is real-valued and f is h-measurable, and suppose g is a continuous real-valued function defined on domain R. Then the composite function g ∘ f is h-measurable.
Proof. If J ∈ I(R), then, by continuity of g, g−1(J) ∈ I(R). Write g−1(J) = K. By h-measurability of f, the set
Image
is h-measurable. Thus, for each J ∈ I(R), the set
Image
is h-measurable.          Image

A.2   Historical Note

The background to most of the themes in this book is amply documented in books and articles. The purpose of this note is to draw together some of the newer themes.
The neo-Riemannian approach to the theory of integration associated with R. Henstock, J. Kurzweil, and E.J. McShane originated in the 1950s. The concepts of generalized Riemann (or -complete) integration appeared in 1955 [85], where they provided a solution1 to a problem of integrals converging or diverging in the presence of convergence factors, a problem first posed to Henstock by his Ph.D. supervisor Paul Dienes in 1944. Using his new techniques of Riemann-, Stieltjes- and Burkill-complete integration, Henstock solved a number of problems in papers published in the 1950s and 1960s, culminating in 1963 with Theory of Integration [93], which gives a comprehensive account of an early version2 of the Burkill-complete integral.
Addressing the 1962 International Congress of Mathematicians in Stockholm, Henstock declared: “Lebesgue is dead!” Nonetheless, Henri Lebesgue (1875–1941) set the standard [137, 138] for integration in the twentieth century. But Henstock’s flamboyant announcement that a new, deeper, and more effective theory of integration had arrived was already being given additional substance by J. Kurzweil, who independently discovered Riemann-complete integration, and whose 1957 paper [129], Generalised ordinary differential equations and continuous dependence on a parameter, was creating a new paradigm for the theory of differential equations.
However, this was not the end of the story, just the beginning. The Riemann-complete integral provides a resolution of the problem of integrating derivatives, a significant but relatively small area of the subject — one that had already been resolved [82] by Denjoy and Perron, using other methods.
Integrability of derivatives is part of the broader subject of integrability of functions, strands of which can be found [101, 102] in the work of authors such as C.-J. de la Vallée Poussin, W.H. Young, J.C. Burkill3, L.C. Young, A.J. Ward, and S. Saks. From the point of view of this book a particularly important strand is the role of integration in the theory of approximation.
About 1962, a physics researcher introduced Henstock to Feynman integrals. These constructs had been used successfully in quantum electrodynamics and quantum mechanics since 1948. But no adequate mathematical method had been found to establish whether or not they “converge”; that is, whether or not they actually exist in a specifically mathematical sense.
Feynman “integrals” worked in practice but not in theory, so to speak. The Lebesgue integral method could not resolve this issue. Could the new developments in integration theory do it?
Independently, E.J. McShane [11] engaged with this problem. His search for a rigorous mathematical foundation for quantum field theory led him to sustained and productive involvement in the theory of integration. Seminal works by McShane include Order-Preserving Maps and Integration Processes in 1953 [153], Integrals devised for special purposes in 1963 [155], and Stochastic Calculus and Stochastic Models in 1974 [158], which has influenced much subsequent research in this area.
With such issues in mind, over the following decade Henstock embarked on construction of a more complete and comprehensive theory of integrability. Page 221 of Henstock [94] has a sketch of the historical background to some of these developments. His paper [101] (The Lebesgue syndrome) is also useful.
In 1968 Henstock published Linear Analysis [94], containing an initial exposition of the division system (or integration basis) theory described in Chapter 4 of the present book, and designated here as the Henstock integral. The Riemann-complete integral is a modification of the nineteenth century Riemann integral with more delicate selection of Riemann partitions. The Henstock integral is based on
  • classes of objects to be selected for partitions, and
  • classes of rules for determining selections.
This system ties the integral to partitions (or divisions), while setting it free from any particular method of integration such as Riemann’s or Lebesgue’s. The basic idea can be described simply as the Riemann sums method in integrability.
Linear Analysis [94] also included an outline (Exercises 43.12 and 43.13, pages 223−224) of integration in infinite-dimensional spaces, partly inspired by a 1934 paper [116] by B. Jessen.
The Henstock integral, or division system, was presented more fully [96] in 1969. Henstock’s 1973 paper [97] showed how the system can be used to define the Feynman integral as a manifestation of an extended version of Brownian motion, thus bringing this construct into the realm of mathematical integration and probability—which is the spirit in which it was originally presented by Feynman in 1948 [64].
Henstock [97] is a historic landmark in the theory of integration and probability. But definition is one thing, existence is another. It was not shown that the Feynman integrals exist. And if they do happen to exist, means of performing more than a finite number of operations on them were not provided. The question of taking limits under the integral sign for highly oscillating integrands such as these was broached in this paper, but not resolved there. Henstock concluded [97] with the statement: “This shows how difficult it will be to introduce Lebesgue-type limit theorems into Feynman integration theory. ... The present paper is only the beginning of the theory.”
A General Theory of Integration in Function Spaces (Muldowney [162], 1987) extended the division system idea, providing a broader structure of associated elements of integration with the capacity to deal with a large class of integrands in infinite-dimensional spaces. This is, essentially, the extension of the Henstock integral used in the present book. It was established in Muldowney [162] that the Feynman integrals of step function integrands exist. But to go further requires taking limits under the integral sign.
Criteria for taking limits under the integral sign—actually a departure from the “Lebesgue-type limit theorems” mentioned in the conclusion of Henstock [97]—were constructed by Henstock for the division system theory in his 1991 book The General Theory of Integration [105]. These criteria (similar to Theorems 62, 64, and 65 in this book) were used by Muldowney in 2000 [165] to establish properties of the Feynman integral.
The partitioning theorem (Theorem 4) used in this book was proved by Henstock, Muldowney, and Skvortsov [107]. This result simplifies and strengthens the definition of the Henstock integral in infinite-dimensional spaces.
The following assessment of the Feynman integral problem was given by Gian-Carlo Rota in his 1997 book Indiscrete Thoughts [199]:
The Feynman path integral is the mathematicians’ pons asinorum. Attempts to put it on a sound footing have generated more mathematics than any subject in physics since the hydrogen atom. To no avail. The mystery remains, and it will stay with us for a long time.
The Feynman integral, one of the most useful ideas in physics, stands as a challenge to mathematicians. While formally similar to Brownian motion, and while admitting some of the same manipulations as the ones that were made rigorous long ago for Brownian motion, it has withstood all attempts at rigor. Behind the Feynman integral there lurks an even more enticing (and even less rigorous) concept: that of an amplitude which is meant to be the quantum-mechanical analog of probability ...
Rota went on to say: “A concept similar to that of a sample space should be brought into existence for amplitudes, and quantum mechanics should be developed starting from this concept.”
The brief historical outline in this section is a summary of developments in the agenda described by Rota—a work-in-progress but no longer a mystery.
__________
1Theorem 67 of this book is, essentially, Theorem 1 of Henstock [85].
2Henstock’s Theory of Integration uses the term “Riemann-complete” to designate integrals which, in this book, are distinguished from each other as Riemann-complete, Stieltjes-complete and Burkill-complete.
3In [103] Henstock cites the work of W.H. Young, as developed in lectures by J.C. Burkill, as a source of the inspiration for his proof by Riemann sums of Fubini’s theorem. Henstock attended a course of lectures by J.C. Burkill in 1942. His Ph.D. thesis, Interval Functions and Their Integrals (1948), is an investigation and extension of J.C. Burkill’s idea of the integral.
