Dedicated to the Northern Forest, which, in a ring around the Arctic, unites so many lands with its beauty.

Dedicated also to Buckminster Fuller, whom I quote here:

  • The Things to do are: the things that need doing, that you see need to be done, and that no one else seems to see need to be done. Then you will conceive your own way of doing that which needs to be done — that no one else has told you to do or how to do it. This will bring out the real you that often gets buried inside a character that has acquired a superficial array of behaviors induced or imposed by others on the individual.
    • Letter to "Micheal" (16 February 1970). Micheal was a 10-year-old boy who had inquired in a letter as to whether Fuller was a "doer" or a "thinker".

Buckminster Fuller worked on a minimal system of concepts and processes for contemporary information, across many disciplines, that would lead people to think and act so as to have a satisfactory physical and metaphysical life on planet Earth and to avoid the extinction of life on the planet. The individual's integrity was a crucial part of this having success. The interconnected disciplines were astronomy, mathematics, philosophy and metaphysics, engineering, design, architecture, urban planning, the arts, economics, politics, sociology, education, etc.


See also a lecture at the National Technical University of Athens in June 2011.




And a special blog about the present ideas at




The next three axiomatic systems, counterparts of the classical, historically foundational logical systems of mathematics, have many finite models of various sizes! This immediately proves the consistency and non-vacuous character of the axiomatic systems. The smaller layer is always the observable, visible or phenomenological, or logical and expressional, interface layer of the system. The larger layer is the ontological, hidden or invisible layer, an analogue of the file and procedure structure of a piece of software, hidden by the programmer from the user. The axioms are stated sometimes for the one layer, sometimes for the other, and sometimes for a combination of the two layers. These choices give a totally new way of thinking, of observing with our senses, of feeling, and of handling our creations, compared to traditional mathematical thought! It is a new experience. For the first time in some 2,000 years, reasoning in, e.g., Euclidean geometry may acquire a new consistency of mind and of observation with the human senses. For example, an axiom stating that between two different points there is always a third is not consistent with observation by the senses. In the present mode of the axioms there is no such axiom! Also, points do have a finite size, as is confirmed by the senses, and are not of no size at all!

In the mathematics of this layer, the complete elimination of the concept of the infinite requires a new technique of observing with the senses, feeling, acting and reasoning. This technique is more sophisticated than the usual techniques of classical mathematics. The reasoning and the observing with the senses (this applies especially to geometry, or to disciplines whose object of study is pictorial) are in complete consistency for the smaller, external or visible layer, while there is also reasoning for a layer which is completely invisible or non-observable, larger or internal. Still, the internal layer, invisible under normal conditions, can become visible and observable too, with pictorial representation, under exceptional, enhanced conditions. The latter guarantees the validity of the reasoning for the non-observable layer.

They are natural, too, after the experience of human-computer interaction in the technology of multimedia. From this point of view, e.g., the abstractness of the infinite is also a measure of how limited the logical, expressional and informational means of handling the system of mathematical entities are, which entities can always be assumed finite in the ordinary sense at the ontological layer. Thus some arguments become simple and elegant in such systems (like the nice, friendly interface of a complicated software system), but this should not be pushed to its limits, and other types of properties of the same finite system require a different axiomatization, which has to be devised or updated from time to time! We follow here the simple interplay of the observable layer with the non-observable layer that is suggested by the senses (auditory, visual and tactile thresholds), by the physics of handling objects, and by the logic of creating them by software on a computer screen. This suggests specific axioms for the interplay of the two layers. Other means of observation, for example those defined by a specific system of physical experimental devices for the micro-world, would result in a different choice of axioms. But in this first approach we use the definition of "observable" by the direct sense-inspection of a statistically standard human being. Thus the only abstraction of the axioms refers to the size of each layer. Even the concept of being finite is layer-dependent. If to such a system of axioms are added axioms that specify the exact finite cardinality of each layer and their mutual relation, called the key axioms, then it becomes a categorical axiomatic system, as it has one and only one finite model that satisfies it! The intended relation of these axiomatic systems with the corresponding classical axiomatic systems is mainly described by the following facts:

1) They are interpretable and consistent: all these new axiomatic systems have finite models definable within the corresponding old, familiar axiomatic system. In their full version, with all the axioms (including the key axioms), they can have only finite models. But by eliminating some axioms, they can also have models described in the old, familiar axiomatic systems as infinite, in one of the two layers, the hidden or the observable, or in both.

2) If possible, they should be logically adequate and finite-wise complete relative to the old mathematics, in the following sense: any proposition provable in the old, familiar axiomatic systems as holding for all the finite models of the new systems is also provable within the new axiomatic systems. This property may make some researchers think of the completeness of first-order Logic or of the forcing method in set theory. It is not certain that we can have this property without also using axiom schemes in the new axiomatic systems. A different version would be that any property provable in the old, familiar axiomatic systems as holding for all models whose observable layer is a finite set is also provable within the new axiomatic systems.

3) If possible, they should be logically adequate and complete as categorical systems: after adding the key axioms that specify one and only one finite model for each of them, any proposition provable in the old, classical axiomatic system for this specific finite model is also provable in the new categorical axiomatic system. This completeness does not seem to require the use of axiom schemes in the new axiomatic systems, as the categorical versions with the key axioms do not require axiom schemes.

We notice here that these relations are quite different from those used in non-standard analysis and non-standard mathematics. There is no way that every theorem of the old, standard mathematics can be transferred into the new mathematics! To begin with, the very axioms of the old, standard mathematics are abandoned in the new mathematics! But all the finite and substantially practical content of the old mathematics can indeed be saved in the new mathematics!

4) If possible, all the axioms of the axiomatic systems are stated within first-order formal languages and Logic! It is possible that in their categorical version, with the key axioms, there are no axiom schemes or second- and higher-order formulae among the axioms. Axiom schemes such as that of induction for the natural numbers, of the supremum (continuity) for the real numbers, or of replacement for set theory, are easily provable theorems in the new axiomatic systems, deducible from simple first-order axioms. This is possible because the new categorical axiomatic systems have only finite models! Thus the advantage of the completeness of first-order logic is fully used in logical reasoning. But we may not have property 2) above for them when they are stated without the key axioms.

5) The relation between the sizes of the natural numbers used at the meta-mathematical level of the formal language and Logic and the natural numbers used in the object language of the axiomatic theory plays a carefully discriminated role; it is discussed and specified, and it is responsible for the creation of the concept of the "operational infinite" within the finite universe of layer 1 of mathematics.

It is standard practice that we make use of the natural numbers at the meta-mathematical level, say in the mathematics of the formal language of the theory (e.g. counting propositions, formulae, etc.), and of the natural numbers inside the objects of the theory (e.g. counting rational numbers, sets, or straight lines). We may think of the meta-mathematical level as the thinking and speaking actions, while the object-level mathematics includes the writing and hand-handling actions too. The meta-level in mathematics corresponds, in computer science, to the hardware or the operating system, while the object level of the theory corresponds to the particular software programmed within the operating system. As the axiomatic system of the natural numbers, when postulated in the new mathematics, may have a maximum element, the new mathematics is sensitive to the relation between the maximum natural number available in the meta-mathematics of the formal language and the maximum number available in the mathematics of the object language. E.g., if the former is smaller than the latter, we may have a perfectly plausible concept of the infinite for the object-level natural numbers compared to the natural numbers available in the Logic! Their relation also determines whether or not we need axiom schemes such as that of induction for the natural numbers. These concepts are also familiar in computer science and the theory of algorithms (computation) as the available memory and time resources of the operating system, the RAM, and the input-data memory of a procedure. Proofs may also be procedures. We may think of the maximum possible natural number of the object language of the theory and the maximum possible natural number of the meta-language of the Logic of the axiomatic theory not only as unknown constants but also as unknown variables, which may furthermore be linked by parallel-processing algorithms, if so postulated.
The same holds for the maximum possible natural number of the observable layer and of the hidden layer. This gives the interpretation of the infinite as a constraint which requires a transcendence of the available space and time resources of the cognition system in representing and reasoning about the ontology of another system. The various grades of the infinite are interpreted as (possibly parallel) processing-complexity measures between these finite numbers of the ontological system, inaccessible to the cognitive system. In the present approach to the concept of the infinite in the following axiomatic systems, we assume that it is always possible, with sufficient enhancement of the resources of the cognition system, to "turn" or "reveal" the infinite of the ontology of the studied system into the finite, while we still describe a new concept of "operational infinite" that corresponds to the classical concept of the infinite, entirely within finite systems. The dynamic interpretation of the infinite, as an algorithm that increases the finite, and also as an ultraistic way of talking about the finite, has often been used in the past as a way to keep a distance from the material, or human-action, ontology of the finite, especially when the thinker considers it undesirable or an obstruction to his attempts to think about the situation. But when the situation is tamed, a different, finitary approach is obviously better.

Boldly speaking, we consider a mathematical system operationally infinite if counting its elements with the maximum possible natural number of the meta-mathematical formal system (e.g. the Logic) does not exhaust it. We must notice that both the meta-mathematical system and the object mathematical system are assumed, strictly speaking, finite. In the world of computers this corresponds, e.g., to the size of the RAM and the maximum acceptable run-time complexity of a procedure being unable to count or scan the disc or storage size of the data of the system. In the human situation it corresponds to a situation where the cognitive powers of perception and memory (of the individual or collective theory-maker) are inadequate for scanning or counting the elements of the object entity to be studied. It may be considered a large gap between the mind's powers and the hand's operational powers. If, on the other hand, the maximum possible natural number of the formal meta-mathematics (e.g. the Logic, or the formal axiomatic system) is comparable to or larger than the number of elements of the object mathematical system under study, then the system is considered operationally finite.
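The definition just stated can be sketched as a toy check. The names and the two bounds below are illustrative assumptions for the sake of the example, not part of any axiomatic system described here:

```python
# Toy sketch of "operationally infinite": a strictly finite object system
# counts as operationally infinite when counting with the meta-system's
# largest available natural number cannot exhaust it.
# META_MAX and the sizes below are hypothetical illustration values.

def operationally_infinite(object_size: int, meta_max: int) -> bool:
    """True when the meta-level counting resources cannot exhaust the object system."""
    return object_size > meta_max

META_MAX = 10**6           # largest natural number available to the meta-logic
hidden_layer_size = 10**9  # elements of the (strictly finite!) object system

print(operationally_infinite(hidden_layer_size, META_MAX))  # True: infinite relative to the Logic
print(operationally_infinite(500, META_MAX))                # False: operationally finite
```

Both numbers are finite; only their relation produces the "infinite", which is the point of the definition.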

Thus, in this approach to the finite 1st-layer mathematical universe, not only is the continuum of geometric lines or real numbers created entirely from the finite, but the very concept of the infinite is also created within finite systems. The finite and the infinite turn out to be two types of interplay of meta-mathematics with mathematics, or of Logic with practice. It is the first time since ancient Pythagoras and ancient Euclid that public mathematics can create, in a rationally clear, logically complete and practically sound way, within finite systems, the concepts or "realities" of the "continuum", the "infinite" and the "irrational numbers". For the first time, irrational numbers do not really exist, the continuum is created entirely from the finite after a logical representation of the phenomenology of the senses, and the infinite is a type of interplay of the finite of Logic with the finite of mathematical procedures.

At least for the new axiomatic systems of the integers and the real numbers, we could choose equivalent axioms stated only for the hidden layer, while the observable phenomenological layer could simply be defined over the hidden layer. This is because the interplay of the visible and the hidden layer is quite standard. We cannot say the same, nevertheless, for geometry or sets. For example, a line segment would appear in the phenomenological layer as a line segment whether it was made by a square lattice of evenly spaced pixels on a computer screen or by an unevenly spaced system of atoms of, say, glass in a palpable glass ruler. Although each axiomatic system refers to two layers, we may define new ones where, for example, the observable external layer of the second is the hidden layer of the first, according to the needs of the logical arguments. E.g., in the standard practice of computer images, a point marked by the user in a computer image has an observable size, chosen from a palette by the user; it is made from a connected set of software-resolution pixels of the image as a bitmap, while each software-resolution pixel is in its turn a connected rectangular set of monitor-resolution pixels! Thus, if the user marks a densest rectangular set of points, then three resolutions participate in this situation.

Research ideas under development 


For the natural numbers, the two layers resemble the discrimination between two categories of positive integers represented in a computer language: the simple integers and the long integers. Or the integers available to the user of a piece of software and the integers available to its programmer and to the operating system. The operations on the small or interface layer may have results in the larger layer, which is the only closure property of the operations postulated. Nevertheless, a subset of the axioms of this system also has transfinite numbers, such as the ordinal natural numbers, as a model. The next axioms are crucial modifications of Peano's axioms for the natural numbers. This system of numbers can also be formulated as two finite Galois fields, each endowed with a natural order, and with obvious modifications of the cyclic operations so that the circles open into linear segments and the operations of the smaller field have output in the larger field.
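As a rough sketch of the two layers and their single closure property, consider the following lines of code; the bounds SMALL_MAX and LARGE_MAX and all function names are illustrative assumptions, not part of the axioms themselves:

```python
# Hypothetical sketch of a double-layer system of natural numbers:
# an observable "interface" layer 0..SMALL_MAX and a larger hidden
# "ontological" layer 0..LARGE_MAX. The only closure postulated is that
# operations on interface numbers may land in the hidden layer.

SMALL_MAX = 100        # maximum of the observable interface layer
LARGE_MAX = 100**2     # maximum of the hidden ontological layer

def layer(x: int) -> str:
    """Report which layer a number belongs to, or fail if it is outside both."""
    if 0 <= x <= SMALL_MAX:
        return "interface"
    if x <= LARGE_MAX:
        return "hidden"
    raise OverflowError("outside the double-layer system")

def add(a: int, b: int) -> int:
    """Add two interface-layer numbers; the result may spill into the hidden layer."""
    assert layer(a) == layer(b) == "interface"
    result = a + b
    layer(result)      # result must still lie inside the system
    return result

print(layer(add(60, 70)))  # 130 -> "hidden": the interface operation spills over
print(layer(add(10, 20)))  # 30  -> "interface"
```

The analogy with simple versus long integers is direct: the sum of two "simple" numbers may need the "long" representation, and nothing beyond it is admitted.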

00) New axioms for (double-layer) finite systems of natural numbers.


The Logic, with the most standard techniques, is reformulated, but in a finite mode. This means that all entities, such as terms, constants, variables, relations, operations, etc., are not only finite in their internal composition, but all of them together are also finite in cardinality (finite resources), and the size is very critical for what can be proved or not. As Logic itself is the object of study here, a system of numbers such as the above double-layer system of natural numbers is also required to count its objects. The relative size of the system of numbers used and the cardinality of the objects of the Logic are very important for questions such as consistency, completeness, decidability, etc. As the Meta-Logic for this Logic can be assumed to be a finite Logic too, the size of the Meta-Logic, and the acceptable length and various types of complexity of the proofs of the Meta-Logic, are critical for what is considered valid for the Logic as object of study. Troubles like those introduced by Goedel, concerning the definition of "propositions" by some kind of "diagonal" argument of the Meta-Logic, are completely resolved and controlled (if they fall inside the system at all) by the relative balance of the complexity and resources of the Meta-Logic, the natural-number system used, and the Logic as object of study. The reverse of all the celebrated negative theorems in Logic (e.g. Goedel's theorems) can be proved to hold too, with the right choice of the parameters of complexity and resources.

In modern computer science, one discriminates the input-size complexity from the RAM-size complexity, the run-time complexity, the code-length complexity of the algorithm, etc. All of the above have applications not only in programming languages but in formal languages too (as programming languages may be considered specifications of formal languages). Although it is a celebrated theorem that there are problems that do not admit an algorithmic solution, e.g. the labyrinth problem, the word problem (Post's theorem), etc., this is because the input complexity ("all possible labyrinths") is too large for the complexity of one and the same algorithm; "all labyrinths" is of unspecified complexity. Nevertheless, if the input complexity is balanced to "labyrinths of bounded size, with the bound no greater than so much", then there may very well exist one algorithm that solves the problem. In the same way, many impossibility theorems in Logic simply disappear, and are valid in the converse way, if the size of the acceptable propositions is finite and bounded above by a number appropriate to the resources of the system of natural numbers and Logic used in the meta-theory (context). Although it may seem that a circular reference is created between Logic and numbers, it is all resolved by a recursive discrimination of the Logic, or the numbers, as specific systems of one level relative to the other systems, and not as one single, standalone entity.
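The point about bounded inputs can be made concrete. No single algorithm decides "all labyrinths" of unspecified size, but one fixed breadth-first search decides solvability for every labyrinth up to any chosen bound; the maze encoding below is an assumption for illustration:

```python
# One fixed algorithm (BFS) decides solvability for all labyrinths of
# bounded size: maze is a list of equal-length strings, '#' = wall,
# entrance at top-left, exit at bottom-right.
from collections import deque

def solvable(maze) -> bool:
    rows, cols = len(maze), len(maze[0])
    start, goal = (0, 0), (rows - 1, cols - 1)
    if maze[0][0] == '#' or maze[-1][-1] == '#':
        return False
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if (r, c) == goal:
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
               and maze[nr][nc] != '#' and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

print(solvable(["..#", "#..", "#.."]))  # True: a corridor leads to the exit
print(solvable(["..#", "###", "..."]))  # False: the middle wall blocks every path
```

The bound enters implicitly: for labyrinths of at most N cells, the search visits at most N states, so both run time and memory are bounded in advance by the resources of the meta-system.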

For more sophisticated approaches to the new finite-resources Logic, we may discriminate between two layers of propositions in the system: the explicit propositions (like the interface accessible to the user of a piece of software) and the hidden, implicit or invisible propositions (like the Logic of a piece of software that is inaccessible to the user, though accessible to the programmer; human-machine interaction is abundant in such experience, e.g. the HCI [human-computer interaction] courses in computer science). Usually the former system has a model and interpretation in the latter system. If the Logic as a system is also attached to an axiomatic system, then the hidden or implicit layer defines a categorical axiomatic system (with a unique, up to isomorphism, finite model), while the explicit layer of propositions defines an abstract, non-categorical axiomatic theory (with possibly many non-isomorphic finite models).

01) New axioms for finite- and bounded-resources systems of Logic


For the real numbers, the two layers resemble the discrimination between two categories of rational numbers: the single-precision and double-precision numbers represented in a computer language. They also have a human visual discrimination, as the visible, external or phenomenological layer represents the significant part of the (rational) quantity, while the invisible or hidden ontological layer represents the higher decimals produced by the operations and required for a stable definition of the phenomenological visible layer. Or the discrimination may not be relevant to the visual discrimination of the human eye, when the applications are not geometric. In such cases it is relevant to the relative size of the accuracy level of measurements of observable physical quantities and the atomic structure of the physical system. The only closure of the operations postulated is that operations on numbers of the phenomenological layer may lead to results in the hidden ontological layer. Bounds similar to the overflows in computer handling of quantities are crucial for the definition of the system. Again, the hidden ontological layer is a finite system of rational numbers. Concepts like dense subsets, Borel sets, and other concepts of descriptive set theory of the real numbers are all finite sets! Obviously any function or distribution on them can be represented by a finite-dimensional vector, and even by a finite list of rational numbers. The operations at each of the two layers are defined, for each granulation pixel (bin), from the ordinary definitions of operations on numbers and the centers of the granulation pixels, which are the canonical representatives. The coarse external layer defines an equivalence relation of rounding on the granulation pixels (bins) of the finer internal layer.
This system of numbers could also be formulated as three finite Galois fields (over the same prime base), each endowed with a natural order, and with obvious modifications of the cyclic operations so that the circles open into linear segments and the operations of the smaller field have output in the larger field. The largest Galois field has pixels invisible to the human eye at a standard distance, and corresponds to the non-accountable (still finite!) part of the calculations (the calculations which, although possible due to the ontology, are not to be included in the description of the phenomenon). If, in addition, its size is significantly larger than the maximum possible natural number of the formal logical axiomatic system for these real numbers, then the invisible part of the real numbers (the largest Galois field) is also operationally infinite, as defined previously (although still, strictly speaking, finite!); otherwise even the non-accountable hidden part is operationally finite. The middle-size Galois field defines the boundary of the phenomenological with the invisible ontology, and mainly corresponds to the human visual discrimination threshold under the set conditions. It has the visible, finite-size points of a line, and represents the part of the accountable calculations that is significant in the final representation of a phenomenon. The smallest Galois field corresponds to the natural numbers within the real numbers. In this setting, natural numbers, rational numbers and real numbers do not really have any essential difference. The fact that the orders of Galois fields are always powers of prime numbers is convenient, as also in multi-resolution wavelet analysis the sizes of the pixels of the different resolutions are chosen as a sequence of powers of a prime number. Multi-resolution wavelet analysis has remarkable applications in the efficient representation of the continuum of digital images and sound.
The choice that the pixel sizes in different resolutions increase as powers of a prime is in accordance with Fechner's law in the psychology and physiology of the senses, which states that if the input of the senses is in multiplicative progression, the bio-ware representation is in additive progression. This is an economy of nature and of our condition as human beings, which also defines the type of our cognition. The same principle is met in music and the design of scales, and also in the decimal base of the system of numbers, where the measuring units are in multiplicative progression (powers of 10). The characteristic of the largest Galois field also defines the resolution of the number system. Instead of Galois fields we may also simply use finite rings modulo a power of 10. We reserve the symbol Rm,n for such finite systems of real numbers, where m is the power of the base 10 down to the smallest visible point or mark (or cell), and n the additional orders of 10 down to the smallest indivisible of the resolution, or invisible point. We assume symmetry in the powers of 10 for sizes larger than 1. The n represents, in other words, the depth of the continuum. For example, we can have as a reference system the R4,4, where there are 4 orders of 10 (decimal digits) of invisible, non-observable or non-significant-in-measurement sizes after the last significant decimal digit, which is at the fourth place after the decimal point. (It is instructive to compare it with a corresponding aleph of the infinite real numbers. As is known, through the forcing method of Cohen the continuum hypothesis was resolved, and it was shown that the infinite real numbers can have any depth.
This is one more case, in my way of thinking, where the classical mathematics of the infinite proves to be a kind of early encryption of facts of the finite universe of real numbers.) We must remark that the ancient Greek term "άρρητος αριθμός" was translated in Europe as "irrational number", but a translation closer to the truth is "classified number". A classified number does not have to be something different from a finite number. What is classified in this system of numbers is the size of the invisible Galois field, in other words the resolution. That it was classified was an element of abstractness, or transcendence from the material of the objects of application. In that ancient age (the time of Pythagoras) it was not widely known that material objects are made of indivisible atoms; therefore anyone believing this should consider such an assumption in the mathematical ontology, too, as classified, or άρρητο. The analogue of the άρρητος αριθμός as depth of the resolution, in physics and chemistry, is Avogadro's number, 6.022*10^23, which essentially defined the physical "resolution" of the tangible continuum of (e.g. gaseous) matter that was familiar in the experiments. The first measurement of this number, which was essentially the universally accepted birth of atomic physics, was by Loschmidt, and was based on the formula of the sample variance in the statistics of Brownian motion, which is not invariant to the sample size (of particles), thus giving the necessary clue. It is therefore obvious that the finite real numbers have to introduce, besides the standard unit of 1 meter, also a new standard for the resolution. For the natural sciences we may use, as the ratio of the unit to the invisible pixel of the invisible layer, the number 10^(-12), or 12 decimal digits, as this is close to the Compton wavelength of the electron, and may cover details down to the diameter of an electron.
For visual applications on a computer screen, the value 10^(-5) seems adequate. Since we are now in the new 3rd millennium, and the atomic constitution of material objects is common knowledge in the civilization, we may simply put a variable x for the size of the invisible Galois field, or resolution, if we intend other applications of a coarser nature than the physics of atomic matter, or of a finer nature, such as a future physics of the fields. This is an adequate abstractness, or transcendence from the particular nature of the material objects to which measurements are to apply. Nevertheless, although the resolution may be a "classified number" (άρρητος), it is still a fixed rational finite number during the mathematical arguments, and if necessary we may and must use this property. This finite integer number measures the depth of the continuum of the particular real-number system. Thus, making a little play on words, we may finally speak of the accountable real numbers, which are finite (in contrast to the uncountable real numbers of the 19th century, which are infinite). Accountability, as everybody knows, gives superior control over scientific measurements and operations, and therefore a real spiritual and practical action advantage to the civilization. In addition, this spiritual advantage is within the measure of the human mind and the human condition, avoiding what the ancient Greeks called "hybris" (=insult) towards the "gods".
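The interplay of the visible and hidden layers of an Rm,n system can be sketched with exact rational arithmetic. The reference system R4,4 of the text is used; the function names and the rounding convention (round to nearest granulation pixel) are assumptions made for this illustration:

```python
# Minimal sketch of the R(m, n) finite real numbers, here R(4, 4):
# the hidden layer holds quantities to m + n = 8 decimal digits, and the
# visible layer is the rounding of a hidden quantity to m = 4 digits.
from fractions import Fraction

M, N = 4, 4
VISIBLE_STEP = Fraction(1, 10**M)        # smallest visible mark: 10^-4
HIDDEN_STEP  = Fraction(1, 10**(M + N))  # smallest indivisible:  10^-8

def to_hidden(x: Fraction) -> Fraction:
    """Snap a quantity onto the granulation pixels of the hidden layer."""
    return round(x / HIDDEN_STEP) * HIDDEN_STEP

def to_visible(x: Fraction) -> Fraction:
    """Round a hidden-layer quantity to its visible representative."""
    return round(x / VISIBLE_STEP) * VISIBLE_STEP

a = to_hidden(Fraction(1, 3))   # 0.33333333 -- a finite number, not an "irrational"
b = to_hidden(Fraction(1, 7))   # 0.14285714
product = to_hidden(a * b)      # the operation lands in the hidden layer
print(to_visible(product))      # 119/2500, i.e. the four visible digits 0.0476
```

Every quantity here, including the stand-ins for 1/3 and 1/7, is a fixed rational number; the "continuum" appears only as the rounding relation between the two finite layers.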

The next axioms are slight but crucial and critical modifications of the usual axioms of the complete ordered commutative field of the real numbers.

02) New axioms for a (double-layer) finite system of real numbers.


For the sets of set theory, the two layers (the external, or sets, and the internal, or classes) resemble the discrimination into classes and sets in the Bernays-Goedel axiomatic system of Cantorian set theory. The axioms are slight but crucial modifications of the Bernays-Goedel axioms, and, as I pointed out previously, they do have many simple finite models! The set operations in the small or interface layer may have results in the larger layer, which is the only closure property of the set operations postulated. If Cantor had studied subsets of such finite systems of real numbers as the above, he would not have had to present his famous set theory, but would have arrived at a set theory like the one below, where all sets are finite. In addition, Goedel would not have needed to introduce the ideas of the Cantorian infinite into metamathematics too! And quite probably, to my instinct, neither Cantor nor Goedel would have ended up in the mental sanatorium in their late years, in such an unfortunate way. I consider the presently suggested developments, or updates, in the science of mathematics a significant cure of the incompatibilities of the long-range and beyond collective mind in the sciences. From the computer-science point of view, they resemble, say, in a database software system, the external observable layer (sets) corresponding to the tables and queries that a user can perform on the database, and the hidden layer (classes) to the tables and queries that a programmer can perform on the database with SQL statements! Furthermore, it could be compared with the modern theory of "objects" in object-oriented programming languages, with their inheritance (belonging), classes, and other relations.

There is a closest concept to that of the Cantorian (or should we say Zermelo-Fraenkel) infinite in this finite world of finite sets. And this concept is the protocol of collective, daily updated, maximal finite sets. E.g., for the real numbers we may imagine the densest, or highest-resolution, finite system of real numbers, based on every real number defined by any creative worker in mathematics on this planet from an initial day until yesterday. Such totalitarian, large, socially agreed finite sets A (up to the last update, of date n) have the property that for any element a in A, the set {a} U a belongs to A, not today, but no earlier than the next morning. As we can easily see, such a concept goes beyond the traditional initial concepts of set theory, as it involves a collective agreement and protocol in the social, scientific and mathematical communities, and time units, concepts like planetary days. It is a concept much like, e.g., weather or other social news and economic data that are daily updated in the databases of the Internet, in a globalized civilization. The Internet both democratizes and globalizes. It is readily realized, though, that such maximal-till-yesterday sets (e.g. of real numbers) are very clumsy, totalitarian entities to be used in mathematical arguments (arguments that would have to hold for any future update and for any refinement of the resolution), while smart and elegant arguments would require only "democratic" finite versions, up to a fixed resolution of the real numbers.

A very important feature of this set theory (which also resolves part b) of the initial remarks of this page, as motivations for this work) is that it is not a new gap in abstraction, and does not really introduce new ontology that numbers and logic cannot derive! This is achieved by defining axiomatically all sets as finite sets of numbers, derived only through the means of formal logic of order n (n-order formal languages and Logic, as described for example initially by the theory of types and predicates of Russell, or even more simply by Hilbert etc.) when included in the objective level of the theory. Thus we apply here a simple philosophical equation: Sets = Logic + Numbers. This restores the property that all mathematical entities are arithmogenic (= generated by numbers) and logical operations! After all, the symbol ∈ of the initial concept of belonging is also the first letter of the word "epomenos", which in ancient and modern Greek means "next", and the only difference of the initial concept of belonging ∈ in sets from the initial concept of next in natural numbers is that in the latter every entity can have at most one next and one previous, while in the former it can have many previous, which leads naturally into tree structures. The tree structure is, nevertheless, the basic pattern of the types of formal propositions and predicates in Logic. This assumption in the present set theory is in conformance with the natural sciences (where atoms make up not only the lattice structures of metals, but also the tree-like structures of chemical compounds etc.) and also with computer science, where all database tables are made, after all, from sequences of bits. This keeps the continuity in the genealogy of the ontology of all mathematical entities. Thus from this point of view the hidden layer is the numbers from which sets are made, and the external observable layer may be the Logical structure used in defining structures upon them.
The external layer is also only a part of the n logical layers of the n-order formal Logic. As part of the Logic is already in the objective language in this way, the meta-level for this theory introduces further levels of Logic that are not included in the objects of study of this theory, and for many purposes it can be kept as only 1st-order logic. Thus we strictly separate the axioms that refer to the numbers from the axioms that refer to the higher-order predicates and relations over numbers. Axioms like replacement or comprehension are already part of the facts of Logic, and lead again to entities of logical character. Notice also that as we make use of a formal language of order up to n, the "sets" of numbers have "height" or "depth" only up to n. But furthermore, and in a similar way, the "width" of any set or tree of ∈ is at most another integer number m, which is not revealed in the axioms as a particular sequence of decimal digits, but is stated as existing for any set. This says much more than the usual well-foundedness axiom, and thus permits many more wonderful structures and ways of reasoning and proving in this new set theory! This should be so, as in Layer 1 of the 7 layers of mathematics the sets are not only of finitely many elements but also form a finite system of sets. This is a basic requirement for all of mathematics when existing in Layer 1! The axiom of infinity has a different meaning in this setting, and refers to the closure of sets of the external, observable layer under some operations within the larger hidden layer. There is always, again, the issue of whether the external layer is finite or infinite compared to the available natural numbers of the meta-language for this theory. But this point has already been met in the discussion of the new axiomatic system of natural numbers, and is resolved and chosen in the same way.
The probable ideas of the ancient Pythagorean philosophers, that partially ordered entities (figured numbers) are the source of everything in maths, have a proof here, as we re-create all of the mathematics at Layer 1 from such entities. My preference here is to define the whole of the maths of Layer 1 from the concept of tree-action, which is nothing more than the old concept of a program or algorithm, described as a flow-chart, which when opened out is a Tree, over one elementary operation, which is the unit counting in natural numbers, and elementary decisions, which are identifications or comparisons of such countings. It is quite spectacular that we can prove that:

1) Tree-actions (a modern correspondent to the ancient Pythagorean figured numbers)

2) Markov Normal Algorithms

3) Turing Machines

4) Recursive functions

5) Free-ring polynomials (algebra)

6) Finite sets of finite sets etc. of natural numbers (Cantor, Peano)

7) Finite, logical types (Russell)

are essentially equivalent entities, as each one can be transformed into the others, and thus they contain equivalent information. Their systems, as finite systems, are isomorphic up to the defined relations and operations.

A tree-action is a geometric entity together with an action or flow on it, defined axiomatically. It is similar in some sense to the ancient Pythagorean figured numbers. It corresponds here to an algorithm not upon the strings of a formal language but upon unit numeric counting, and upon deciding about the equality of natural numbers. That any number is created by repetitive counting of the unit is of course what we are familiar with. But the counting that creates the natural numbers is sequential. Here we introduce also concurrent counting. Sequential counting creates the branches of the tree, and gives the natural numbers (the height of the tree). But concurrent counting creates new entities that are not natural numbers, namely the branching of a node of the tree into many parallel branches (the width of the tree). If it were a musical action, sequential counting would correspond to a melody, while concurrent counting to a chord. We introduce a symbolic writing of the tree-action, where 1 is the unit counting, o is the sequential composition (repetition) of unit counting, and U (union) is the concurrent repetition of unit counting. Thus while 1o1 = 2, 1U1 is not the number 2. We introduce also symbols and operations for the elementary logical decisions that are necessary in any algorithm, and we put them as exponents or operators of a 3rd, external operation.
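The symbolic writing just described can be sketched (my own hypothetical encoding, with nested tuples, not the author's formalism) so that 1o1 is the number 2 while 1U1 is a branching that is not a number:

```python
UNIT = ('1',)

def seq(a, b):
    # sequential composition "o": counting a, then b (adds to the height)
    return ('o', a, b)

def union(a, b):
    # concurrent composition "U": two parallel branches (adds to the width)
    return ('U', a, b)

def is_number(t):
    # a tree-action is a natural number exactly when it is a single branch,
    # i.e. it contains no concurrent union U anywhere
    return t[0] == '1' or (t[0] == 'o' and all(is_number(k) for k in t[1:]))

def value(t):
    # the natural number of a single-branch tree: the total of unit countings
    return 1 if t[0] == '1' else value(t[1]) + value(t[2])

two = seq(UNIT, UNIT)        # 1 o 1, the number 2
not_two = union(UNIT, UNIT)  # 1 U 1, a chord rather than a melody
```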

These two operations make a tree-action into a free-ring polynomial, with associativity, distributivity etc. While the union or concurrent composition is commutative, the sequential one is not. We notice that the o corresponds to the usual addition of numbers. The usual multiplication of numbers is defined from the addition. Notice nevertheless that the union is an operation over which the addition of natural numbers is distributive! Notice also that the sequential composition o is a wider operation than the addition of numbers, as it applies to pairs of tree-actions, which makes it in general non-commutative. Natural numbers are only a special category of tree-actions, where the tree is only one branch.

Free-ring polynomials are a very elegant and simple way to write, in lines, all the information of a program and its flow-chart, either as a graph or as a tree, and whether it is a sequential or a parallel computation. We enhance the entities of such free-ring polynomials (thus of non-commutative multiplication) with exponents (e.g. from a finite Boolean algebra of events) corresponding to the elementary decision that has to be taken before we execute the next commands (the base of the power). The multiplication of such polynomials is the sequential composition (or call) of commands, and the addition (or union) the parallel or concurrent execution of two commands. It is remarkable that the elements of a finite set as in 6) are interpretable as a concurrent (parallel) run of commands. Thus the above equivalence of the basic entities of Logic, Sets, Algebra, and Computer programs proves that at Layer 1 it is the same, from the technical point of view and in some sense conceptually too, whether we base Mathematics on Logic, or on Numbers, or on Geometry, or on Algebra (operationalism), or on Computer Science!
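A hypothetical encoding (mine, for illustration only) of such polynomials as sets of command sequences exhibits the ring-like laws: the union is commutative set union, while sequential composition concatenates, distributes over the union, and is non-commutative:

```python
# A "free-ring polynomial" of a flow-chart, sketched as a set of parallel
# branches, each branch being a tuple of sequential commands.

def union(p, q):
    # addition: concurrent (parallel) execution; commutative set union
    return p | q

def seq(p, q):
    # multiplication: sequential composition; concatenates every branch of p
    # with every branch of q, hence distributes over union
    return {a + b for a in p for b in q}

A, B, C = {('A',)}, {('B',)}, {('C',)}
left = seq(A, union(B, C))            # A·(B + C)
right = union(seq(A, B), seq(A, C))   # A·B + A·C
```

Here `left == right` (distributivity), while `seq(A, B) != seq(B, A)` (sequential composition is non-commutative), matching the laws stated above.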

03) New axioms for (double layer) systems of sets.


For Euclidean geometry the two layers correspond to the visible phenomenological interface, e.g. of a finite straight line segment on the computer screen, while the larger ontological layer corresponds to the invisible programmer's pixels of the lines, over some finite resolution. Even in computer image processing there is the discrimination between the hardware's screen or monitor resolution and the software resolution of the image as a bitmap. In the case of the computer, of course, according to the user's choices maybe only the latter is visible, or none, or both. Usually the monitor's resolution is invisible, while the software-defined resolution of the image as a bitmap can be visible. In addition, the user's marks or points are still of different size (always made from a software-resolution connected set of points or pixels), and their visible size, corresponding to the thickness of the marking tool (pencil etc.), can be chosen from a palette! Thus the points are of at least these two types, and so are the lines, planes etc. A phenomenologically unique point may contain many ontological pixel points! In this geometry the concept of accuracy level is very important. The identity of entities in the layers is defined according to the accuracy level. As in the ancient mode in Euclid, the lines are always bounded, straight linear segments, and the corresponding system of natural or real numbers must again be double-layer and compatible with the referred geometric system. Two lines intersecting in the phenomenological layer may not intersect in the ontological pixel layer! This geometry, compared to that of Euclid, is simply a modern update after the knowledge in physics that material objects have atomic structure, an update appropriate for the new millennium. The usual equations of geometry (e.g. the Pythagorean theorem) are exact on the phenomenological-interface layer only in the sense that they are rounded relations on the ontological layer. So e.g.
the incommensurate linear segments or quantities in the tenth book of Euclid take a totally different meaning! In the ontological layer they always have a common measure, and are measured with rational numbers! This is so e.g. for an isosceles orthogonal triangle with equal sides equal to one and hypotenuse equal to the square root of two! On the other hand, at least, it is impossible to prove in such a system that they are incommensurate in the phenomenological layer! The next is a counterexample for such a proof: the minimum visible length (threshold) in the phenomenological layer may be smaller than a chosen common unit and submultiple of each of the equal sides, and larger than the remainder of the division of the hypotenuse by this chosen unit. In that particular case the unit is non-zero in the phenomenological layer, while the deviation from the exact Pythagorean relation is below the threshold of the phenomenological layer; thus the deviation is zero and the Pythagorean theorem holds in the phenomenological observable layer, with rational terms, as an equality (in the phenomenological layer the equality or congruence is a rounded equivalence relation of the hidden ontological layer. We must not forget also that for the congruences of the observable layer of geometry the rounded equivalence relation of an observable layer of real numbers is used, and for the hidden layer the corresponding hidden layer of the real numbers). This means that in this case the equal sides and the hypotenuse of the orthogonal isosceles triangle are indeed commensurate with an observable common unit, in their double-layer finite-resolution geometry, and in both layers, the observable and the hidden, they are commensurate!
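A small numeric sketch (mine; the resolution 10^-4 is an arbitrary choice) of this situation: at a fixed resolution the hypotenuse of the unit isosceles right triangle is an exact rational of the finite number system, and the Pythagorean relation holds as a rounded equality below the threshold:

```python
import math
from fractions import Fraction

def round_to_resolution(x, k):
    # snap a value onto the finite decimal grid of resolution 10^-k,
    # giving an exact rational of the finite number system
    return Fraction(round(x * 10**k), 10**k)

k = 4                                      # resolution 10^-4 (arbitrary)
h = round_to_resolution(math.sqrt(2), k)   # the rational "square root of two"
deviation = abs(h * h - 2)                 # deviation from the exact relation
threshold = Fraction(1, 10**k)
```

The deviation is strictly below the threshold, so at this resolution the Pythagorean theorem holds with rational terms.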
The next axioms are crucial modifications of Euclid's and Hilbert's axioms of Euclidean geometry that are deducible from a definition of the geometry as 3-fold coordinates of double-layer real numbers (the Cartesian or vector or analytic geometry approach). The Archimedean maximality of Hilbert, as a definition of continuity, has a nice, obvious and robust substitute based on a property of the double-layer finite resolution of any geometric object. The hidden, invisible-in-pixels layer may seem redundant to some at the beginning, but it is very important as a layer distinct from the observable interface.

The necessity of the discrimination between a phenomenological and an ontological layer is more plausible if it is identified through the discrimination threshold of the average human visual ability at a standard distance, thus as the visible-points layer and the invisible-points layer.

It becomes even more important in the theory of curves and surfaces, as the infinitesimal or tangent space at any observable point is exactly the partition or equivalence class of the observable point over the non-observable points of the hidden layer. The whole setting is by far non-equivalent to that of the old mathematics of differential geometry, as continuity and differentiability, as we remarked, may or may not hold, and may switch in holding, over the same surface at different resolutions or space-scales. So many different systems of differential equations may be written for the same set of points or manifold at different resolutions, representing different geometric properties of the same entity at different resolutions! Much of the work on digitized image processing in computer science is of great help for the appropriate concepts in this new geometry. Another spectacular deviation from the ancient Euclidean geometry is e.g. the definition of the ratio of the length of the circumference of a circle to its diameter. This number, which in classical mathematics is the irrational number pi (π), is here a rational number. The answer to the question of which rational number this ratio is, is that it is a different rational number for geometries of different resolution. Thus even for circles of the same diameter this number may be different. E.g. if we are talking about a circle made from material copper wire, the size of the copper atom defines this number. If we are talking about a circle drawn by, say, Archimedes on the sand, then the average size of the granules of the sand defines this number, and if we are talking about a circle on the screen of a computer, the size of the pixels of the screen and the bitmap image resolution define this number.
In this 1st-layer geometry endowed with a resolution specification, the "irrational" pi (π), as an algorithm for increasing the digits of 3.14, is meaningless as a non-terminating algorithm, and meaningful only as a numeric, rational, final output. We must not forget that the concepts of the infinite and of irrationality (of numbers) refer to, and are properties of, not so much the ontology of the entities of study, but rather the states of consciousness (individual, and social or civilization-collective) of the subjects that study and make the knowledge. It is in the evolution of culture and societies that a century may come when these are substituted by the finite and the rational, in a glorious new setting.
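A sketch (mine, illustrating only the rounding of the numeric output, not the measurement of a material circle by its atoms or granules) of how this ratio becomes a different rational number at each resolution:

```python
import math
from fractions import Fraction

def pi_at_resolution(k):
    # the circumference-to-diameter ratio as a final rational output of the
    # finite decimal system of resolution 10^-k
    return Fraction(round(math.pi * 10**k), 10**k)

coarse = pi_at_resolution(2)   # the rational 314/100
fine = pi_at_resolution(4)     # the rational 31416/10000
```

The two outputs are distinct rational numbers: the ratio depends on the chosen resolution, exactly as the copper-wire, sand and pixel examples suggest.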

04) New axioms for (double layer) entities of finite Euclidean geometry.


To clarify how powerful these new lines of reasoning, observing and handling of the mathematical entities are, in Layer 1 of the 7 layers, we may present a direct proof of the famous and still unsolved (as far as I knew) Poincare conjecture in the topology of three-dimensional manifolds (actually of what has survived after the refutation of an important part of it). The topology on systems over finite resolutions is based on the initial concept of two observable points being in contact, or on the concept of the topological closure (or 1st-order continuation) of a set (all points in contact with the points of the set). As the observable points are cells of the resolution lattice, being in contact, or being at zero distance (over the observable number system), means the obvious: either being identical cells or having a point or side in common. This topological closure operator is not like the closure operator in classical topology, and actually it is not even a closure operator as defined in algebra. The difference is that the closure of the closure may be a strictly larger set, and we may define the n-th-order closure of a set. There is no valuable distinction between open and closed sets, as in the topologies of interest all sets can be both. The information required for the topological arguments is defined by the set of all 1st-order closures (or continuations) of any set, or equivalently by the binary relation, on all observable points, of being in contact or not. There is, though, the definition of an interior point of a set. The proof can be enhanced with induction on the number of points, for any finite model of the geometry! (Here we take all models of the geometry that have finitely many points, on both layers, and even the concept of finite is ramified into observable-finite or not.)
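The contact-based closure can be sketched (my own illustration on a square pixel lattice, with contact meaning identical cells or a common side or corner, i.e. Chebyshev distance at most 1) to show that, unlike a classical closure operator, the closure of the closure is strictly larger:

```python
def closure(cells):
    # 1st-order closure (continuation) of a set of lattice cells: all cells
    # in contact with some cell of the set (including the set itself)
    return {(x + dx, y + dy)
            for (x, y) in cells
            for dx in (-1, 0, 1)
            for dy in (-1, 0, 1)}

s = {(0, 0)}            # a single observable cell
c1 = closure(s)         # its 1st-order closure: a 3x3 block of 9 cells
c2 = closure(c1)        # the 2nd-order closure: a 5x5 block of 25 cells
```

Since `c1` is a strict subset of `c2`, iterating the operator keeps enlarging the set, which is why the n-th-order closure is a meaningful notion here.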
The completeness of 1st-order logic, the adequacy and completeness of these axiomatic systems as stated in 2), and the fact that all axioms are within 1st-order logic give the validity of the proposition for the axiomatic system. If we do not want to make use of a completeness as in 2) but of a completeness as in 3), which for sure does not require axiom-schemes in the new mathematics, then we must formulate the new axiomatic systems appropriately for this, and we get a different proof for each such categorical system, which is, as we remarked, a decidable system. The previous proof shows how the "naive" belief of Hilbert, as an integration of his formalism program, that practically most mathematical problems are solvable once they have been formulated, is indeed almost so, through the decidability property, but only at the appropriate layer and with the appropriate assumptions about the available resources in logic and numbers.

In the same way we may as well obtain a relatively simple proof of the Riemann Hypothesis on the zeros of the zeta function over the (finite-resolution) complex numbers, which can also be essentially the proof of E. Artin for the Riemann hypothesis about the zeta function over abstract fields. Nevertheless, the finite-resolution complex numbers are not literally a field.

The above approach shows a different mentality and philosophy about many unsolved problems in mathematics, and may also prove that such problems are essentially meaningless to try to prove in the old setting of infinite mathematics (Layer 7), but are naturally solved in the practical and finite approach (Layer 1) of mathematics.

05) A direct proof of the Poincare Conjecture in the new double layer and finite resolution geometry for small depth resolutions.

06) A  direct proof of the Riemann hypothesis  in the new double layer and finite resolution complex numbers for small depth resolutions.

07) A direct proof of the Goldbach  hypothesis, in (finite) systems of natural numbers for small size of maximum number ω.

A tour in Mathematics as seen at Layer 1: mathematics without the infinite

Let us make a tour in classical historical mathematics (of Layer 7) as it could be re-written at Layer 1, after founding it without the concept of the infinite. With the new techniques of visible and invisible elements and resolution, the infinite disappears. Many unsolvable problems become solvable. The feeling of such mathematics is more positive and optimistic. Some of the known complexities in concepts and cases completely disappear too, but other complexities of revealing importance appear. Let us not forget that neither Euclid nor Pythagoras in ancient Greek mathematics used the infinite in a literal way, as we know it. Let us not forget too that Newton first, and then Leibniz, invented the infinitesimal calculus as a technique of symbolic calculation, to avoid massive numerical calculations that were not possible at that time without computers. Unfortunately, academic mimicry of their techniques led to new complications of modern mathematics, by far more complicated and intractable than any modern massive numerical calculation (of a finitary version of them), so that the initial advantage of fast results, after 3 centuries, became a well-developed disadvantage of inapplicable mathematics. We must be warned too that although the world of mathematics without the infinite is a better "paradise" than the "paradise of the infinite" of G. Cantor (of the latter it may be said that it has already turned into a "Cantor's Hell"), the world of mathematics without the infinite shall have its difficulties too. One not easily suspected difficulty is that, as the ontology of such mathematics is closer to the real structure of nature and reality (the metaphysics of the infinite seemingly disappears), the creative process and projection in the human consciousness requires great care, so as to avoid not mathematical but emotional traps.
We must mention that the development of mathematics without the infinite is already, again, a necessity, as computers, digital sound and images have already made it practice, without the appropriate "manual" or clear conceptual cover for this. Therefore, if we want not to first act and then think, but to first think and then act, we must develop the mathematics without the infinite. In addition, this does not mean that only people using computers heavily can appreciate it. On the contrary, it is developed so as to be carried out on paper too, thus for even larger social realms. Furthermore, a better link, on a human measure, of thoughts, of the words of the mouth, and of the works of the hands is created, and this means a civilization of excellence and perfection.

I take as an example the three-volume book titled MATHEMATICS: Its Content, Methods, and Meaning, edited by A.D. Aleksandrov, A.N. Kolmogorov and M.A. Lavrent'ev, and translated into English by K. Hirsch. The 3-volume collective work has been published by The MIT Press, Copyright 1963 American Mathematical Society. We shall make a tour among the chapters of the book, and make some sketchy remarks about how the new finite foundations affect, improve, simplify, or make more sophisticated the various areas of mathematics.

Chapter I General view of Mathematics

No comments

Chapter II Analysis

The infinitesimals dx, as discussed above, are simply rational numbers in decimal form, but below the accuracy level (referred to as the resolution) of the implementation or instance of Analysis (e.g. 0.00001000). There is no need of limits, which would correspond in the present approach to a sensitivity analysis among different resolutions. All becomes simple, comprehensible and transparent, as far as a single invisible resolution is concerned. It is a perfect restoration of the ideas of Newton, Leibniz etc. Of course such an analysis is defined over a finite system of real numbers as above.
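A sketch (mine; the resolution 10^-4 is an arbitrary choice) of the derivative as an exact finite difference, with dx a small rational and no limit process involved:

```python
from fractions import Fraction

dx = Fraction(1, 10**4)   # the "infinitesimal": a rational below the accuracy level

def derivative(f, x):
    # the derivative at a fixed resolution is literally the finite difference
    # quotient, computed exactly in rational arithmetic
    return (f(x + dx) - f(x)) / dx

square = lambda x: x * x
d = derivative(square, Fraction(3))   # exactly 6 + dx, a rational number
```

For f(x) = x^2 at x = 3 the result is exactly 6 + dx: the classical value 6 plus a rational deviation below the resolution, with no limit taken.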

The integral is also unique (the Cauchy, Riemann, Lebesgue, Daniell schemes etc. all become the same integral). According to the depth of the continuum and the resolution, the definite integrals exist and are finite sums, provided the integration limits are apart by a threshold distance.
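A sketch (mine; the step 1/100 is an arbitrary resolution) of the unique integral as a literal finite sum of rational terms:

```python
from fractions import Fraction

dx = Fraction(1, 100)   # the resolution step (arbitrary choice)

def integral(f, a, b):
    # the definite integral at this resolution is just a finite sum
    # of exactly computed rational terms
    n = int((b - a) / dx)
    return sum(f(a + i * dx) for i in range(n)) * dx

val = integral(lambda x: x, Fraction(0), Fraction(1))
```

The integral of x over [0, 1] at this resolution is the exact rational 99/200, which differs from the classical 1/2 by an amount below the resolution.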

Nevertheless, a multi-resolution analysis is by far more comprehensive and sophisticated, as a function may be differentiable up to a resolution but non-differentiable at finer resolutions, or vice versa. The same applies to continuity. Functions are always defined both on the visible layer and on a 1st invisible layer (finite resolution), and two functions can be equal at the visible layer but not equal at the invisible one. Going to finer resolutions always requires an extension of the function, as functions for analysis, continuity, differentiability, topology etc. are definable only down to a 1st invisible resolution.

Chapter III Analytic Geometry

The marvelous idea of the coordinates of Descartes may be considered a forerunner of the present digitalization of images and sound in computer software. The methods of Analytic geometry become even more transparent and effective, as now the visible and invisible points of a line segment are finite, and so is the system of numbers used as the coordinates. So a double correspondence of numbers to points is required: at the visible layer and at the invisible layer. Their relation is the depth of the resolution. For more details see above the remarks in the paragraph about double-layer Euclidean geometry without the concept of the infinite. Only one invisible resolution is required, even if differentiable curves are included in classical Analytic geometry. In other words, we have the pixels of invisible points (which constitute a permanent invisible simplicialization of space, of a cubic or parallelogram lattice form) and the grid of visible points. The accuracy threshold level here has to be identified as a fixed visual discrimination level, at a fixed standard distance from the geometric figures. Another interesting situation is with the concepts of length, area, and volume. Hilbert's 3rd problem holds in a converse way in this realm of mathematics: any two polyhedra that are of equal volume are also equidecomposable. The tools of the Dehn invariant and of equal-volume but non-equidecomposable polyhedra of infinite Euclidean space have null interpretation in the finite-resolution Euclidean geometry. This is not strange, as it is a difference between the Euclidean geometry based on the infinite Cantorian sets and the Euclidean geometry of finite resolution. As we mentioned above in this page, the "surgery" of taking one Euclidean ball and, by the axiom of choice, cutting it into a finite set of pieces and then reassembling them to give two (!)
balls, although it exists in the Euclidean geometry of infinite Cantorian sets, does not exist in the finite-resolution Euclidean geometry. It exists only as giving two balls of lower or coarser resolution than the resolution of the initial ball. Conversely, there are phenomena of perfect symmetry in the Euclidean geometry of finite resolution that do not exist in the Euclidean geometry of infinite Cantorian sets. We already mentioned e.g. the case that orthogonal triangles always have sides that are rational numbers, and the Pythagorean theorem holds exactly too. Another example is that we can tessellate a spherical surface with a large number of congruent spherical squares (or a large number of congruent spherical equilateral triangles). (To see how this is possible, take e.g. the projection of an inscribed cube onto the spherical surface to give an initial tessellation of 6 congruent spherical squares. This is the best you can have in the Euclidean geometry of infinite Cantorian sets. But in the Euclidean geometry of finite resolution we can have a massive number of perfect spherical tessellations. We divide e.g. the sides of the inscribed cube into a sufficiently large number of squares, so that when they are projected onto the spherical surface their difference is below the discriminating threshold of congruence up to the space's resolution. Therefore they are all perfectly congruent in the space's resolution, although there is of course a resolution finer than the initial resolution of the space in which they are not congruent.) In general, in the Euclidean geometry of finite resolution the conceptual order and perfection is higher, while many of the logical difficulties of the Euclidean geometry of infinite Cantorian sets seem as if they are tricks to impress and to make people spend time as academic researchers on matters that are not of real value in applications or in the physical ontology of the world.

Chapter IV Theory of Algebraic Equations

Here we have many changes. The methods of solving equations with symbolic calculations, closed formulae, radicals, the four operations etc. lose a lot of their significance and interest, after realizing that:

Because all real numbers are a finite system of rational numbers, represented in decimal form with an equivalence relation up to an accuracy level, even the perfect algebraic solutions have to be exact rational numbers, up to some decimal. Therefore the computer algorithms that solve an arbitrary equation are in most cases not worse methods or approximation methods, but exact methods.
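A sketch (mine; the equation x^2 - 2 = 0 and the resolution 10^-4 are my choices for illustration) of such an exact method: bisection down to a fixed resolution returns an exact rational of the finite number system, not an "approximation":

```python
from fractions import Fraction

def bisect_root(f, lo, hi, k):
    # bisection down to the resolution 10^-k, in exact rational arithmetic;
    # assumes f changes sign on [lo, hi]
    step = Fraction(1, 10**k)
    while hi - lo > step:
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid          # the root lies in [lo, mid]
        else:
            lo = mid          # the root lies in [mid, hi]
    return lo                  # an exact rational of the finite system

root = bisect_root(lambda x: x * x - 2, Fraction(1), Fraction(2), 4)
```

The output is a rational number whose square differs from 2 by less than the threshold: within the finite number system this is the exact solution, not an approximate one.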

Nevertheless, if we change the reference resolution we get different solutions, which brings a whole realm of new sophistication into the domain.

 Of course the classical techniques of symbolic calculations to solve particular types of them are not lost.

The celebrated theories of Galois and others, the theorem of Abel on the impossibility of solving 5th-degree polynomial equations with radicals, etc., turn out to be much trouble just for the sake of restricting the methods used to find solutions.

In the present universe of mathematics where the Cantor’s infinite does not exist, it is possible to have simple and powerful results:

For every polynomial equation of any degree (2nd, 3rd, 4th, 5th, nth etc.) there is an algorithm that factors it into first-order and second-order polynomials with real coefficients, thus also an algorithm to find all its complex and real roots. Furthermore, the algorithm can be chosen so that the only operations used are addition, subtraction, multiplication, division, and integer powers!

Chapter V Ordinary Differential equations

All differential equations are solvable with almost invariably the same algorithm, up to a resolution. All functions are finitely many in this universe of mathematics. Of course the classical techniques of symbolic calculation to solve particular types of them are not lost. In the present new universe of mathematics without the infinite, the previously mentioned algorithm has complexity depending on the depth of the resolution, and can therefore be much too high. Algorithms of radically less complexity can solve any first-order differential equation, and in general any differential equation that can be written with the value variable on one side and all other variables on the other side can certainly be solved, as it is essentially a difference equation with the additional specification of the equivalence relation of the accuracy level, which greatly simplifies all recursive calculations (thus it is both simpler than a differential equation in the classical mathematics of the infinite, which requires convergence issues, and simpler than a difference equation in classical mathematics, as the equivalence relation of the accuracy level of the finite number system saves redundant calculations).
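A sketch (mine; the equation y' = y and the step 1/100 are my choices for illustration) of a differential equation as a literal difference equation at a fixed resolution:

```python
from fractions import Fraction

dt = Fraction(1, 100)   # the time resolution (arbitrary choice)

def solve(y0, steps):
    # at this resolution, y' = y IS the difference equation
    # y(t + dt) = y(t) + dt * y(t), iterated in exact rational arithmetic
    y = y0
    for _ in range(steps):
        y = y + dt * y
    return y

y_at_1 = solve(Fraction(1), 100)   # the value at t = 1
```

The result is exactly (1 + 1/100)^100, the finite-resolution exponential: an exact rational, with no convergence issues, close to the classical value e.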

There is no need of limits or speeds of convergence etc. We do not talk about "approximation", as the mathematical ontology and the definitions of functions etc. are always up to a resolution; therefore all solutions are exact. Thus there is no need to have two courses, one on "differential equations" and a second on "numerical analysis", to practice the solutions with computers. The first course is at the same time the second.

Various physical, social, or financial phenomena have systems of differential equations up to a resolution. Nevertheless, if we change the resolution, and accept smaller and larger numbers relative to the unit and different levels of significant accuracy, we may have to add some new equations to handle new ranges of quantities and mutual relations. In addition, the same system of equations, although with a unique solution in the classical sense, may have different solutions at different resolutions!

This formulates for the first time, as far as I know, the old idea that "causality is a block of flats", or a hierarchical system of contingencies with many organizational layers, in a purely deterministic way. A complicated phenomenon in many levels of time, space, the material realm or another informational realm might be formulated as a function defined in many resolutions, by differential equations that are supposed to describe its "causal law". It might be thought that the unique necessary causality is the form of the differential equations at the finest resolution. But it seems that the truth is that there are separate causalities, with different forms of differential equations (describing the causality in two consecutive resolutions) at each resolution. The causality at the finest resolution is significant only at the finest resolution. So although the whole phenomenon may function as a totality, the causalities are many and different at each resolution. An example of this approach is the many different explanations of the same, e.g. social, events that different groups of people give (e.g. sociologists, politicians, psychologists, religious people, astrologers, economists, biologists etc.). Very often they reflect contingencies in different layers of organization of society and the world that are relevant to the events.

Of course, even if we restrict to a single invisible layer only, which I do recommend, we get the usual formulations of the "causality" of phenomena in the form of familiar differential equations, with the advantage that they have a clearer-cut practical numerical interpretation, without limits, easier to solve and to teach to students.

If in the deterministic mode the idea of a hierarchical specification of causality may seem surprising, in the non-deterministic stochastic mode it is certainly an already quite familiar technique, as we can for example see in the discipline of Hierarchical Linear Models (HLM) of time series and stochastic processes.

Chapter VI Partial  Differential equations

The same remarks made above for ordinary differential equations apply for partial differential equations too.

Chapter VII  Curves and surfaces

The remarks made above for Analysis, and analytic geometry apply in combination here.

Again we must remark that although the calculus and differential geometry of curves and surfaces at a single resolution is a lot simpler and more realistic than the classical approach, we now have a new source of complexity and sophistication: curves and surfaces, if defined simultaneously in many resolutions, may possess different differential (or topological) structure at different resolutions! But for repeating all the good results of differential geometry, only one invisible resolution is adequate. The "tangent space" is literally the geometry of the invisible pixels inside a visible pixel, and parallel connections, curvature, metrics etc. are definable, without limits, in a transparent, charming, clear way that even an accountant who does not use variables can understand!

Chapter VIII  The Calculus of Variations

The remarks made above for Analysis, apply here too.

Of course the classical techniques of symbolic calculation for solving particular types of variational problems are not lost. But any function is now equal to another up to an accuracy level (or up-to-a-resolution), thus numerical techniques are simpler, uniformly the same for different problems, and thus more effective.

Chapter IX  Functions of a Complex Variable

This traditional beautiful subject does not lose its beauty, but becomes even more beautiful and transparent.

The same remarks made above for Analysis apply here. All series of analytic functions are up-to-a-resolution, therefore they have a finite number of terms. The concept of a conformal or analytic function holds only up to a resolution. If such functions are defined simultaneously in many resolutions, they may possess different differential (or topological) structure at different resolutions! Thus the analytic properties of complex functions become layered. This is a source of new sophistication (the cost of eliminating the infinite), but at a single resolution the whole theory is simpler, more realistic, and more beautiful. For small-depth resolutions, a computer, or smart proofs, can easily answer old celebrated problems, like Riemann's hypothesis etc.

Chapter X  Prime Numbers

Almost nothing changes here. I should remark, though, that even the system of natural numbers goes only up to a maximum number ω (omega).

This number may be unknown or hidden, or explicit. Nevertheless it exists, and affects not only the ontology of the theory but also the arguments. E.g. we may have a simple proof of the Goldbach conjecture, or of Fermat's theorem, for all numbers (up to ω, of course), if we assume a particular form of ω, based on its prime-number decomposition or the number of its decimal digits.

The classical proof that the square root of 2 is an irrational number cannot exist in the present approach, as the equality of natural numbers is not to be confused with the equality of rational (real) numbers up-to-a-resolution. The square root of 2 is easily proved to be a rational number in a particular resolution!
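A minimal sketch of the last claim (the helper name and the choice of a six-decimal-digit resolution are mine): in the number system of that resolution there is a rational whose square equals 2 up to the resolution, and no finer numbers exist to contradict the equality.

```python
from fractions import Fraction
from math import isqrt

def sqrt_at_resolution(n, digits):
    """The rational square root of n in the number system of the given
    decimal resolution: exact up to 10**(-digits)."""
    scale = 10 ** digits
    return Fraction(isqrt(n * scale * scale), scale)

r = sqrt_at_resolution(2, 6)   # 1414213/1000000
# r * r differs from 2 by less than the resolution threshold
```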

Chapter XI  The Theory of Probability

This very important subject changes in the same way as Analysis. All distributions are up to a resolution. The moments of a distribution are finitely many. The characteristic function of a distribution is a series of finitely many terms. The continuous random variables are those for which the distance of two possible values may be less than the visual threshold! Again, everything becomes simpler and transparent at a single resolution. But we may also have multi-resolution probability and statistics.
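As an illustration (the grid pitch and the truncation window are my own hypothetical choices, not constructions from the text), a "continuous" distribution at a resolution is a finite table of probabilities, and its characteristic function is a literal finite sum rather than an integral:

```python
import cmath

h = 0.01                                             # resolution pitch
support = [k * h for k in range(-300, 301)]          # finite sample space
weights = [cmath.exp(-x * x / 2).real for x in support]
total = sum(weights)
probs = [w / total for w in weights]                 # a normal law, up to resolution

def char_fun(t):
    """Characteristic function as a finite sum of finitely many terms."""
    return sum(p * cmath.exp(1j * t * x) for p, x in zip(probs, support))
```

Up to the resolution this agrees with the classical exp(-t**2/2) of the standard normal.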

Various physical, social, or financial phenomena have systems of stochastic equations, up-to-a-resolution. Nevertheless, if we change the resolution, and accept smaller and larger numbers or probabilities relative to the unit, we may have to add some new equations to handle new areas of quantities and mutual relations. In addition, the same system of equations, although with a unique solution in the classical sense, may have different solutions at different resolutions!

In particular, after the specification of a resolution in geometry and a resolution in the quantities of probabilities, it is better understood why the paradoxes of geometric probability (Bertrand's, Buffon's needle etc.) are met. The probability sample spaces are finite, and all the paradoxes are resolved in crystal-clear terms in a unique way that most would find almost obvious, due to new details of the geometric ontology that did not exist in the classical geometry of infinite point sets.

Although the next remark is not directly related to the changes that the ontology of finite resolution makes in mathematics, it is significant to mention if we want to avoid tactics that breed intentions to almost lie in a sophisticated scientific way. The standard way that statisticians or applied scientists "fit" or estimate a stochastic process or time series over a one-element sample of paths (a single observed path) must definitely be avoided! We can "fit" in this way many radically different stochastic processes with a high degree of classical "goodness of fit" and practically claim all kinds of different and opposite assertions! Statistics requires repetition and large samples, and in the case of stochastic processes this applies to paths, not points. So at best, when only one path is observed, a way to cut it and make a many-element sample of paths is required!

 Remark about stochastic differentiation and stochastic integration.

Many interesting changes occur in the theory of continuous-time stochastic processes. E.g. Itô's stochastic calculus is entirely simplified, as the stochastic integral is after all a finite sum of random variables. The accuracy-level threshold is the key that removes all the troubles of stochastic convergence, and of the different types of stochastic limits, in defining the derivative and the integral. The new stochastic calculus based on a resolution is not only mathematically more robust but also easily comprehensible. On the other hand, other types of continuous-time stochastic calculi, like those used in signal processing, sound and image filters etc., are also entirely simplified and in complete correspondence with actual implementation in computer software, without the traditional concepts of limits and of "approximation". Furthermore, the continuous-time stochastic processes used in quantum mechanics also become better understood. The latter stochastic processes are Markovian and are usually described by higher-order partial differential equations that rule the time evolution of the probability distribution, rather than by direct equations of the state random variables. An interesting remark is that, if such a PDE of the probability densities is of higher than first order, then, as any higher-order Delta involves more than two terms, on the invisible grid of points (fixed resolution) the process as a discrete time series may be non-Markovian with many-steps memory, while on the grid of visible points it appears as Markovian with one-step memory only.
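A sketch of the simplification (parameters and names are mine; a plain random-walk discretization, not a construction from the text): the Itô integral of W dW at a resolution dt is a literal finite sum of random variables, and Itô's formula holds up to the resolution with no stochastic limit taken:

```python
import random

def ito_integral(T=1.0, n=100_000, seed=0):
    """Compute the Ito integral of W dW as a finite sum at resolution dt = T/n,
    together with the closed form (W_T**2 - T) / 2 from Ito's formula."""
    rng = random.Random(seed)
    dt = T / n
    w, s = 0.0, 0.0
    for _ in range(n):
        dw = rng.gauss(0.0, dt ** 0.5)
        s += w * dw            # left-endpoint (Ito) evaluation: a finite sum
        w += dw
    return s, (w * w - T) / 2

finite_sum, closed_form = ito_integral()
# the two agree up to the resolution, with no L2-convergence argument needed
```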

Definitely the stochastic differential equations are vastly simplified as a subject, and again practically all stochastic differential equations are solvable almost with the same algorithm.

Chapter XII  Approximation of Functions

As functions are always defined only up to a resolution, both at the visible and at the invisible layer,  there is no approximation, but exact ontology. This subject is already included in the other subjects.

Chapter XIII  Approximation methods and computing techniques 

As functions are always defined only up to a resolution, there is no approximation, but exact ontology. This subject is already included in the other subjects. Here it is apparent that all of the mathematics of layer 1 have a direct implementation in computers.

Chapter XIV  Electronic Computing Machines

The mathematics of the zero level, without the infinite, may of course be called computer mathematics, as realizing it in computers is entirely easier and more appropriate, and may be a good emotional motivation to work it out. But we should not miss the point that it can be developed entirely on paper, and on a teaching white- or blackboard, without computers at all. It can also be used as an extensive manual, in paper form, for any implementation of mathematics in computers.

Chapter XV  Theory of functions of a real variable

The remarks made above for Analysis apply here too. The system of real numbers is a finite set of rational numbers in decimal representation, thus all sets to be used in measure theory are finitely many and with finitely many elements. This makes everything simpler, simplifies arguments, and allows new, more powerful theorems. There is no discrimination between the Cauchy integral, Riemann integral, Lebesgue integral, Daniell integral etc. All are the same for a single resolution. But if functions are defined simultaneously in many resolutions, then the same integral may have different values at different resolutions!
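A sketch of the claim that all the integrals coincide (the decimal grid is my illustrative choice): at a fixed resolution there is a single notion of integral, a finite sum over the finitely many numbers of the system in [a, b):

```python
from fractions import Fraction

def integral(f, a, b, resolution):
    """The one integral of a fixed resolution: a finite sum over the
    finitely many grid numbers of [a, b), with pitch 10**(-resolution)."""
    h = Fraction(1, 10 ** resolution)
    n = int((b - a) / h)
    return sum(f(a + k * h) * h for k in range(n))

val = integral(lambda x: x * x, Fraction(0), Fraction(1), 2)
# val equals 1/3 up to the resolution; the Cauchy, Riemann and Lebesgue
# constructions all reduce to this same finite sum
```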

Chapter XVI  Linear Algebra

The mechanisms of solving linear systems of equations, and the basic theorems of Linear Algebra and linear vector spaces, remain, with an important point in mind: the very concepts of a field of numbers and of a vector space, strictly speaking, do not exist here.

The number system is not a field. To define a kind of closedness (or should it be called openness!) under addition, multiplication and scalar multiplication, it is required to refer to and discriminate between the visible and invisible points. The specification of resolutions is again the new key here.

Chapter XVII  Non-Euclidean Geometry

The same remarks made above for Euclidean geometry apply here too. Because the congruence of lines, triangles etc. holds only up-to-a-resolution and accuracy level, and lines always have a finite length (they are segments), we may have, even in Euclidean geometry, non-congruent lines (segments) through a point outside a line that still never intersect it within the geometric space (which is of course a bounded, finite-size "window").

Thus the very definition of parallel lines has to be made carefully (e.g. based on the angles to a common intersecting third line).

Chapter XVIII  Topology

As is known, topology comes from the continuity properties of functional and geometric entities. But as continuity is here always up to a resolution, topology too reflects continuity properties only at a single resolution. In addition, all points are finitely many. Topology can be based on the concept of two points being neighbors, or in contact. We must also discriminate between visible and invisible points! These issues are a new source of sophistication, as we have eliminated the infinite. The closure of a (finite, of course) set of invisible points may be a set of visible points. Thus idempotency (the closure of a closure is the same closure) of the closure operator may hold here by definition. But if we define a closure operator on the visible points (to include all visible points that are in contact), then the resulting operator is not idempotent! Therefore the axioms and initial concepts of topology are different here!
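A toy one-dimensional sketch of the last point (the grid and the contact relation are my own minimal choices): the operator that adds every point in contact with the set is not idempotent, since each application reaches one step further:

```python
def closure(points):
    """Add to a finite set of grid points every point in contact with it
    (here: at distance at most 1 on the integer grid of visible points)."""
    out = set(points)
    for p in points:
        out.add(p - 1)
        out.add(p + 1)
    return out

once = closure({0})
twice = closure(once)
# closure(closure(S)) != closure(S): the operator keeps growing the set
```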

Many arguments of topology become simpler, and many different types of topological spaces that make sense in classical mathematics of the infinite, do not make sense here.

Chapter XIX  Functional Analysis

This is a course in which many things change completely! The remarks made above for analysis, differential equations, probability, topology etc. apply here. Two functions are equal only up to an accuracy level. And the functions are defined only up to a resolution.

To define Dirac's delta (e.g. at zero), we need two number systems, at two different resolutions. In the coarser resolution the function seems zero everywhere, with a value not definable at zero, while at the finer resolution it is not zero everywhere, and is finite at zero, with a value larger than the largest number of the system of numbers at the coarser resolution. Integrating at the finer resolution gives a number, existing in the coarser resolution too, equal to 1. Everything is simple and there is no need for the twisted functional definition of Schwartz, nor for the sequential definition of Shilov! Engineers will recognize in this definition of Dirac's delta what they always had in mind but was never formulated and defined in mathematics, as the concept of a finite system of quantities at a specific resolution had never before been defined in mathematics.
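A minimal sketch of this two-resolution definition (the pitches 1/10 and 1/10000 are my own illustrative values): the delta is an ordinary function on the fine grid, its single spike exceeds any number that fits in the coarse system, and its finite-sum integral is exactly 1:

```python
from fractions import Fraction

H = Fraction(1, 10)       # pitch of the coarse (visible) number system
h = Fraction(1, 10_000)   # pitch of the fine (invisible) number system

def delta(x):
    """Dirac's delta at 0 on the fine grid: zero everywhere except one
    fine pixel, whose height 1/h dwarfs every coarse-system value."""
    return 1 / h if x == 0 else Fraction(0)

# integration at the fine resolution is a finite sum, whose value, 1,
# exists in the coarse system as well
total = sum(delta(k * h) * h for k in range(-10_000, 10_001))
```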

The functions of a functional space are finitely many, and the linear-space dimension is finite too! Thus the complications of unbounded and bounded operators in Hilbert spaces do not exist here. All arguments become easier, and many new theorems of remarkable power can be proved. Although the functions are finitely many, they may be too many even for a computer, according to the depth of the resolution. So expecting them to be few enough that a computer can scan them all is always an optimistic attitude, as far as proving a theorem in functional analysis is concerned.

The interesting theorems are of a different nature in finite-resolution functional analysis. E.g., instead of proving that any "almost periodic" function is a limit of a series of purely periodic functions, the interest here is in Shannon-type theorems concerning the size of the required information:

How large does a base of periodic functions have to be to derive a function exactly, at a resolution and at the visible layer? And many more similar questions relating the information on the side of the base of functions to the visible accuracy level and the depth of resolution.

The complications of the axiom of choice in set theory disappear too! Let us look at the celebrated argument that proves, with the axiom of choice in Euclidean geometry, that we can cut a solid spherical ball into finitely many pieces and reassemble them to make two spherical balls of radius equal to the original. In the light of finite resolutions in geometry, the argument is essentially equivalent to the claim that we can indeed do that, but the derived new spherical balls are of lower resolution (so that the sum of the finitely many points of the resulting new balls makes in total the points of the original ball at higher resolution!)

 Chapter XX  Groups and other Algebraic Systems

Only the finite groups of the classical mathematics of the infinite, and the finite algebraic structures, exist here in exactly the same way.

The infinite algebraic structures of classical mathematics do not exist in the literal way of their definition. In their place exist new entities, defined at two layers of elements, so as to define a kind of "closedness" or "openness" under the algebraic operations. An example of how this is treated and handled here is the system of real numbers at a resolution, which eventually has only a finite number of elements. Of course most of the techniques of morphisms, categories, automorphisms, isomorphisms, inductive limits, systems of generators, free algebras, and other concepts of universal algebra can essentially survive here too.



These papers were written by the author in order to resolve two important situations in mathematics, which he noticed while lecturing on the island of Samos:

a) For two or more centuries, mathematicians and physicists were writing equations where the infinitesimals were treated, separately and together, as different from ordinary quantities. The present differential calculus makes use only of their quotients, which are ordinary real numbers.

b) Until the 19th century, all mathematical entities were generated by numbers (arithmogenic), including all geometric shapes, curves, surfaces etc., after the arithmetization by coordinates by Descartes, Riemann etc. During the 20th century, after Cantor, the ontology of mathematics changed and all entities are created by sets instead of numbers. So the question arises: is it possible for entities like numbers or computer procedures to gain back the power to create all entities in mathematics?

The present work resolves mainly part a). But in order for its results to have fruitful applications, part b) is also resolved, which is also relevant to a major turning point in the history of mathematical thought on this planet.

The true resolution of issue a) is through a redefinition of the system of numbers as finite systems of finite (rational) numbers with a finite resolution (exactly as the computer represents numbers, but not only with single and double precision; with many degrees of precision), where the phenomena of "orders of magnitude" can be formulated by concepts almost the same as the concepts of non-Archimedean or pre-emptive orders of infinitesimals, finite and infinite numbers etc. As in every creation there is a phase of preliminary artistic design prior to the development, here also the artistic and phenomenological abstraction of infinite numbers is a prelude to it. The discrimination between finite and infinite, or various grades of the infinite, is simply a discrimination of transcendentally separated (meaning with a large gap) areas of the finite, which may also have different informational and logical determination. In addition, the abstractness of the concept of the infinite is also a measure of how detailed and specific the logical expression means and informational handling of the system of mathematical entities are, which can always be assumed finite in the ordinary sense.

The simplest concept of the infinite arises when the cognitive resources of space or time for representing numbers, information or data-objects (for an individual or a group of minds, a standalone computer or a computer network etc.) cannot represent a number or data-object of the environment's physical ontology, because it is too large. Then the alternatives are: not to represent it at all, or to represent it with an abstract object (e.g. a set), or with a symbol representing an unknown constant or variable, as this number may also change while we do not have the resources to count and determine it. This is the transcendence of the infinite.

The next papers belong to the old mathematics and, from the point of view of the 7 layers, to layer 6 and above. As I see it at the present time (2005), the value of using infinity (besides its historic role in an earlier phase of the evolution of civilization) is mainly that it permits a distance from the finite material ontology, and permits free thinking, which feels better. Otherwise practicing is hardly separated from thinking anymore, and hinders thinking with many emotional traps too. Therefore infinity should not be taken seriously in its literal sense, as a new different ontology, as this would lead to a collective paranoiac dead end without hope for practical applications, or with more and more difficult practical applications. Not to mention that it would become, like a "computer worm", a mental seduction that would consume more and more time and lead further and further away from reality and life. In my way of appreciating it today, the infinite can be considered mainly as a metaphor or "encryption code" of facts about the finite. Probably facts about the finite for which civilization had not been entirely ready at the time of their introduction.

Thus the concept of the infinite is related to the limited measure of the chances of a group of human minds, either natural or artificially extended, in dealing with the ontology and phenomenology of the environing world that surrounds them. From this point of view we may consider as an early study of the infinite in ancient Greek culture the book "Psammites" (which means "sand") of Archimedes, about very, very large numbers. Obviously what is objectively infinite changes as the collective cognitive resources of civilization change. E.g. what is infinite when counting with pencil and paper is different from what is infinite when counting with a computer, and with higher-order formal languages in Logic.

The dynamic concept of the infinite, as a procedure that e.g. computes the digits of the number π and is terminable only by an external artificial stopping time or length, and not intrinsically by the logic of the algorithm, is analyzed in the present approach as follows. We see in this concept two different elements, instead of one as in traditional mathematics (traditional mathematics sees the infinite number of digits of the one irrational number π). We see, first, the category of finite entities, which are the finite instances of π (rational numbers), on which the algorithm applies; and we see also a procedure or algorithm of a special type, itself of finite syntactic length. These two entities are not to be confused as one entity. Numbers are to remain always finite and rational, while the "cardinalities" or "ordinalities" of the "infinite" are to be defined and analyzed as complexity structures of these special types of (externally terminable) algorithms. Thus the dynamic concept of the infinite is to be reformulated separately, as finite-length data entities and finite algorithms of a special type. The concept of algorithm has many variations in computer science, but the basic alternatives among them have been proved equivalent. There are of course some logically non-equivalent gradations of the concept of algorithm, based on the mutual combinations of their syntax size, input-data memory size and run-time complexity bounds. And we may also think of new concepts of algorithms that computer science has not formulated and studied yet (e.g. algorithms that do not have a fixed syntax; they might have a fixed nucleus of syntax that can itself reproduce the rest of the syntax in a variable way, always of course with a fixed upper limit on its length). Still, the discrimination between the number as a finite data element (a rational number) and a well-defined finite algorithm that acts on it and increases its information should always be made.
The diagonal arguments of the hierarchy of cardinalities and ordinals would correspond to the diagonal arguments of "non-computability" of some decisions by some types of algorithms.
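The two elements above can be sketched as follows (the Leibniz series is my own choice of "perpetual program", for illustration only): the algorithm has finite syntax and no intrinsic stopping condition, and each external stop yields a finite rational instance of π:

```python
from fractions import Fraction
from itertools import islice

def pi_instances():
    """A perpetual program: yields rational finite instances of pi
    (partial sums of the Leibniz series) and never stops by itself."""
    total, k = Fraction(0), 0
    while True:                          # no intrinsic termination
        total += Fraction(4 * (-1) ** k, 2 * k + 1)
        yield total                      # always a finite, rational entity
        k += 1

# "pi" is analyzed into (finite algorithm, external stopping time):
instance = next(islice(pi_instances(), 499, None))   # stopped after 500 steps
```

The number and the algorithm stay distinct: every stop returns an ordinary rational, while the "infinity" lives only in the special (externally terminable) type of the procedure.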

This "non-computability" is an effect that is almost always converted to the fact that an algorithm which enumerates what a system of other algorithms does, including itself, must have higher complexity than all of them; and if there is a constraint on that complexity in its definition, then obviously it cannot be written or run as such a type of algorithm. The diagonal arguments of Gödel in Logic about the "non-provability" of some formulae are again converted to such an effect of an impossibility due to a complexity constraint. E.g. if we want a piece of software to verify the consistency or inconsistency of all possible paths and events of another software system, then it has to be of much larger (memory-space and run-time) complexity compared to the software it has to analyze. And if we have put an upper limit on this complexity which is not adequate, then we end up with an impossibility. The impossibility always becomes a possibility again, of course, if we relax the complexity constraints.

We must not forget that the concepts of the infinite and of irrationality (of numbers) refer to, and are properties of, not the ontology of the entities of study but the states of consciousness (individual and social or collective) of the subjects that study them and make the knowledge. With the evolution of culture and societies, a century and a day may come when this changes to finite and rational.

The true resolution of b) should again be by an extension of the concept of natural number to that of computer data-object, where not only a complete order but, in a more economic way, non-complete or partial orders between them are meaningful and hold.

Pythagoras is reported to have believed that "figured numbers" are the key to universality. Part b) is for the moment resolved with a 7-layer description of any mathematical entity. The key is layer 5, which is a countable model of ZFC set theory where each set is computable and consists of computable sequences of finite trees created by algorithms. In addition this model (or set theory) is unique up to isomorphism of the ∈ relation of belonging (a categorical axiomatic system with stable semantics). The next papers contain the details in relation to part a). But there is a hint in their introduction about a model of ZFC set theory with sequences of finite sets of cardinality at most that of the continuum. This model can be improved to one of computable sequences of growing finite trees, and the trees can be defined to consist of numerical digits or alphabetic characters; thus much like the world created by a computer with its 0, 1 bits. The details of how this model of ZFC set theory resolves part b), together with some new concepts like that of internal and external consistency and 1st and 2nd completeness of 3-valued Logic, is something that the author might present in detail in the future. These results are not contradictory to the first or second incompleteness theorems of Gödel; rather, they complete Gödel. By this I mean that instead of assertions like "It is not possible to prove within the formalism of a system that it is consistent", we are interested rather in assertions like "It is not possible to prove within the formalism of a system that it is inconsistent" [internal consistency], or "It is possible to prove, and we prove, outside the formalism of a system that it is consistent" [external consistency].
And instead of assertions like "There is a statement of the formalism of the system that cannot be proved from the axioms of the system" [1st incompleteness within 2-valued Logic], we are interested in assertions like "For any statement of the system, it is decidable whether it is independent from the axioms or not, although this does not mean that it is decidable whether it is true or not" [3-valued-Logic completeness]. A 3-valued-Logic complete system need not be a (2-valued-logic) decidable system. In a decidable system you know everything, but in a 3-valued complete system you know what you do not know.

The 7-layers approach to the creation of mathematics proves that all of the known mathematics can be created with a succession of enhancements of layers, such that each layer has as its logical model, by interpretation, the previous layer.

(As a visualization I prefer a horizontal image of the 7 successive enhancements of mathematics, rather than a vertical building-like image that may unavoidably make some think of the old tale of the tower of Babel...)

Layer 1: It is a finite entity (like a microprocessor in computer science) or a finite set of finite entities (trees), with only finitely many propositions in the formal language, and the system is decidable [2-valued-logic complete] (like the bounded resources of disk and RAM in a computer). In this layer everything is decidable and known. We can have alternative set theories in layer 1, so that not only are all sets finite (and of finite rank), created from the empty set, but there is also an upper bound on their cardinality (horizontal size) and rank (vertical size), which gives that there are only finitely many different (finite) sets. Models of such set theories have a representation inside the computer, say as file folders, with the belonging relation ∈ corresponding to the obvious relation of a file folder belonging to another file folder, or, at a higher layer, as tables of records of a finite-memory-size database, where a record already represents the information of another table. We may call such set theories "Computer Data-Structures Set Theories".

Layer 2: It is again a decidable system of finitely many finite entities (trees), but the acceptable formal propositions and proofs are computably many (maybe infinite; thus the infinite enters here, but in the language first). (An analogy with the on-board RAM and hard disk of a computer.)

Layer 3: It is a system of countably many finite entities (trees), with computably many formal propositions and proofs, such that although it is not decidable, it is decidable whether a proposition is not provable [3-valued-Logic complete; much like the user of a software system who has been granted fewer responsibilities than the programmer, but knows what he cannot do]. So although in this layer not everything is decidable and known, it is decidable and known what is not known. (It is like the machine language and the operating system in the computer.)

Layer 4: It is a countable undecidable system (of what I call Pythagorean sets or trees, like 1st-order Logic or the natural numbers; Pythagorean sets are finite sets (finite horizontal size) of finite rank (finite vertical size)), and the acceptable formal language is also countable and undecidable (like a programming language in computer science). In such a system there are no infinite sets except the empty set, which "looks" like an infinite set! It can be considered that ancient mathematical thought is described up to layer 4. The modern scientific and mathematical thought is described from layer 5 on.

Layer 5: This is a very critical layer. It is a countable system of "weakly computable" sets, and the acceptable language is also countable and undecidable. (It is like the RDBMS of a programming language.) In this layer, for the first time, each object of the system can be an infinite object (in spite of the fact that computer programs can have only finite length). As it is computable, the syntax of the program to compute it is again finite. It is not at all easy to prove that such a "simple" system can be a countable model of the Zermelo-Fraenkel ZFC set theory, or of the BGC set theory, that accepts infinite sets. Again, how a computer would represent sets is the guiding idea. This model of set theory has semantic stability in the sense that it is a categorical theory (all models of it are isomorphic up to the initial concepts, a property that the theory of real numbers and Euclidean geometry have too). It is important though to notice that this "weak computability" is a triple of 1) an algorithm or timer that counts time steps of a clock, 2) a general algorithm, or an algorithm that is a "perpetual program" (in other words, an ordinary algorithm that terminates not internally but by the constraint that it has run so much time up to now), and 3) an initial input Pythagorean finite set. Such a triple is assumed to represent a set, e.g. like the sequence that computes the irrational number square root of 2. The perpetual program may change the initial Pythagorean set and produce finite instances, until it stops. The timer may vary the time for which the algorithm is applied. Nevertheless, if we try to represent the overall process of ever increasing the time duration of the perpetual program, we end up with nothing else than a virus that consumes the resources of the computer until we shut it down! Such a model of ZFC may be called a "Computer Processes Set Theory".

Layer 6: The well-known universe of Zermelo-Fraenkel or Bernays-Gödel set theory with the axiom of choice (like a database application from the point of view of a user).

Layer 7: A classical theory, such as the real numbers, geometry, geometric manifolds, functional spaces, or random variables and processes etc. (It is like an application software.)

The analysis of the layers itself is assumed to use layer 5 as its context. This shows that any mathematical entity can have many different dimensions of existence, according to the layer of the creation of mathematics from which it is viewed and to the level of externality of its user. There is no contradiction at all between its different properties at each layer. At each new layer, for reasons of abstraction, we eliminate some of the information (axioms) that specifies the objects, which still holds, but only in the lower hidden layer. This approach is an elaboration, and a more synthetic and organized system of concepts, than the classical two-part scheme of theory and model. So any mathematical entity is at the same time: 1) a unique finite entity, 2) a finite entity of a decidable system, 3) an entity of a complete system with 3-valued logic, 4) simply a finite entity of an infinite universe, 5) an infinite but computable entity of a countable universe, 6) a set of ZFC set theory of some, possibly very large, cardinality, 7) a classical mathematical entity like an irrational number, a line of Euclidean geometry, or anything else. From layer 3 or 5 onwards, the gap between "thinking-deciding", "acting" (constructibility) and "having" (ontology) increases, and we come to the phenomenon of mathematical theories that are more "oracular" or "mantic" (which in the classical terminology of the philosophy of mathematics means highly non-constructive, based on existential assertions only, and not categorical axiomatic systems) than technological, rational and practical activities. The non-categorical character of the axiomatic systems after layer 5, and their "oracular" character, resembles the relation of human consciousness to the reality of its physical existence.
The way that layer 6 is modeled in layer 5 is critical, as layer 6 is not categorical while layer 5 is, and it shows how the infinite can be modeled in the finite, so that "the ontology of the infinite is the phenomenology of encrypted changes of the finite, where information about the changes is missing (abstractness)". Or, to put it in an even clearer statement: the discrimination between the ontology of the finite and the infinite, or between various grades of the infinite, is simply a phenomenological, encrypted discrimination of transcendentally separated (meaning: with a large gap) areas of the finite, which may also have different informational and logical determination. In addition, the concept of the abstractness of the infinite is also a measure of how detailed and specific are the means of logical expression and the informational handling of the system of mathematical entities, which can always be assumed finite, in the ordinary sense, at the ontological layer. The simplest concept of the infinite arises when the cognitive resources of space or time for representing numbers, information or data-objects (for an individual or a group of minds, a standalone computer or a computer network etc.) cannot represent a number or data-object of the environment's physical ontology because it is too large. Then the alternatives are: not to represent it at all, or to represent it with an abstract object (e.g. a set), or with a symbol representing an unknown constant or variable, as this number may also change while we lack the resources to count and determine it and while we reason about it. From this point of view it is an abstractness, a lack of specification, and an encryption approach too. This is the transcendence of the infinite.

Thus the concept of the infinite is related to the limited measure of the chances of a group of human minds, either natural or artificially extended, in dealing with the ontology and phenomenology of the world that surrounds them. From this point of view we may consider as an early study of the infinite in ancient Greek culture the book "Psammites" ("The Sand Reckoner") of Archimedes, about very, very large numbers. Obviously, what is objectively infinite changes as the collective cognitive resources of civilization change. E.g. what is infinite when counting with pencil and paper is different from what is infinite when counting with a computer and higher-order formal languages in logic.

The dynamic interpretation of the infinite, as an algorithm that increases the finite, is different. It has often been used as a way to keep a distance from the material, or human-action, ontology of the finite, especially when the thinker considers it undesirable or an obstruction in his attempts to think about the situation.

Thus some arguments become simple and elegant in such systems (like the nice and friendly interface of a complicated software system), but this should not be pushed to its limits, and other types of properties of the same finite system require different axiomatizations, which have to be devised or updated from time to time! This 7-layers approach shows the coexistence of different philosophies and mind-styles, like ancient Pythagorean or Euclidean mathematical thought, Intuitionism, Logicism, Formalism etc. In particular, it becomes apparent that the Hilbert program of Formalism can indeed be carried out for all of mathematics, but only down to a lower hidden layer of it. This approach also shows in full detail how classical mathematical objects, like a Euclidean line or an irrational number like pi, are created by computers, and how, although infinite and possibly uncountable entities, at some hidden lower layer they are still computable or even finite.

In view of layer 5 of the above approach, all of the transfinite real numbers, surreal numbers, or ordinal real numbers have an exact interpretation as sizes of the concurrency complexity of computer procedures! Concurrency complexity is a new concept; it is not space, time, or resource complexity, but a measure of the dependence of procedures in parallel programming.

This restores the link of human thought with practical human actions, a link that, apart from practical applications, is important for the integration of the human subjective state. Strange as it may seem, it holds that the creative world of the finite has more choices and freedom for the mathematician than the creative world of the infinite.

Although the ontology of the infinite seems radically different from the ontology of the finite, it may turn out, without contradiction, that there are logically valid interpretations where the ontology of the infinite is created by some relations in parallel computations on the finite. It is also created by an abstraction on the time-states of an object that is gradually created: the abstraction is that we consider it the same entity in all its successive states of creation, and that the interruption or stopping of the perpetual process is not internal to the procedure. This is the way to pass from the dynamic, Aristotelian concept of the infinite to the Platonic static infinite. Although we may accept creations of the mind, as Cantor required, when the moment comes to link thoughts with human actions and operations, we have to supplement the abstraction with exactly its missing information, which unlocks the concepts for practical applications. Although the initial impression was that G. Cantor was leading mathematics into his paradise, it finally resulted in Cantor's hell. (Cantor himself died insane in a sanatorium.) If we try to discover the closest concept to the infinite in the world of the finite (as we shall see in the sequel), we immediately realize that the infinite is the totalitarianism of mathematics, while the world of the finite permits a real conceptual democracy of creativity. Like any totalitarianism, it seems attractive and may feel good at the beginning, but sooner or later it results in a totally wrong and destructive role for its users.
From this point of view, to abstract and transcend from the finite to the infinite (or infinitesimal) is somehow the same as an encryption, in the sense of a logical formulation of the finite with missing information, or as the discrimination between the hidden (programmer's control) and visible (user's control) parts of a software application system. Thus, to translate it into the science of informatics (computer science), the Zermelo-Fraenkel axioms of set theory refer to how to create both data entities and procedures from other data and procedures. The mental images of set theory, for an infinite set, point not to the syntax of the procedure that creates it, but to the fast-changing successive states of the data created by the procedure. So we should not be surprised if, after all, the transfinite numbers, for many practical applications, turn out to be interpreted, in addition to concurrency complexity, also as ordinary rational numbers at significantly different resolutions of grids (orders of magnitude). E. Nelson's approach (at Princeton) of internal and external real numbers is a hint of this, and it seems to me that even simpler and more transparent definitions and interpretations can be derived within standard mathematics.

We must remark, of course, that if layer 7 (e.g. the real numbers, or Euclidean geometry etc.) has a model in layer 6, and layer 6 has a model in layer 5, then, transitively, the real numbers and Euclidean geometry have models within layer 5 too, where all sets are computable (including a special kind of non-terminating algorithms in the definition of computability)! This is how a computer scientist would try to interpret the classical systems of real numbers and Euclidean geometry inside the computer. To see how it would ever be possible to arrive at a countable continuous system of real numbers (a model of the real numbers in layer 5), we just notice that if, in the definition of the real numbers as the completion of the rational numbers by Cauchy fundamental sequences, we put the restriction that the fundamental sequence is not arbitrary but (weakly) computable by a computer program, then we get a countable system of numbers representing the continuum, although, by the diagonal argument, no longer a computable one! But furthermore, we do not really need in the definition "all computable sequences of ...". We could just as well define the real numbers by an algorithm which refines, say, a finite segment of the decimal lattice with finitely many digits. Then the number system itself has an algorithm to derive it, thus it is also computable. There is no need for the algorithm computing a number and the algorithm computing the number system itself to have any sequential dependence (sequential programming, an assumption corresponding to the real numbers having a higher cardinality than the cardinality of the digits of one real number), as they can very well have concurrency dependence (parallel programming). In a similar way, most of the infinite ontology of layer 6, when interpreted in layer 5, reflects this particular type of "sequential" dependence of the algorithms that define it.
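The idea of a single algorithm refining a finite segment of the decimal lattice can be sketched as follows. This is my own illustration (not the text's formal construction): one finite program that, for any requested grid step 10^-k, produces the rational lattice point just below the square root of 2, so the "number" is named by a finite text even though its refinement never ends.

```python
# Sketch: one algorithm refining the decimal lattice. For each k it returns
# sqrt(2) truncated to the grid of step 10**-k, as an exact rational.

from fractions import Fraction
from math import isqrt

def sqrt2_on_lattice(k):
    """floor(sqrt(2) * 10**k) / 10**k, computed with exact integer arithmetic."""
    return Fraction(isqrt(2 * 10 ** (2 * k)), 10 ** k)

for k in (1, 3, 6):
    print(k, float(sqrt2_on_lattice(k)))
```

Each output is an ordinary rational number on a finite grid; the infinite object "square root of 2" appears only as the family of all these finite refinements.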
No other computational dependence is permitted by the axioms of set theory (like the replacement axiom, power set etc.), which is a serious drawback compared to what can be defined between algorithms. "The simpler the computational complexity, the better," assumes the computer scientist; this translates into the observation that the creation processes available in set theory, when translated in layer 5 into algorithms that create new entities, do not correspond to good enough practice and low enough complexity in the final procedures. The available freedom in composing algorithms that increase finite sets is lost. To derive a pre-emptive or non-Archimedean order of the numbers, corresponding to transfinite numbers as in the surreal numbers, we should require a modification of the equality of two numbers as fundamental Cauchy sequences, and put forward other definitions that involve more information from the algorithm that computes them. For example, we may be interested not only in "where" the algorithm converges but mostly in "how" it converges there. We may notice that such ideas are close to the critique by the philosophical school of intuitionism and the neo-Pythagoreans of the rest of mathematics at the beginning of the 20th century, except that they are even more restrictive than the intuitionistic techniques, in the following way: we do not include "arbitrary sequences by personal free will..." but only algorithms that can be repeated by any mathematician or not. Obviously in geometry too, which is defined after such a number system, only points defined by some algorithm are included. This guarantees that the number system represents the geometric continuum (at layer 5). Surprisingly enough, in spite of the similarities of this approach with the critique of intuitionism, a reversed slogan seems more appropriate: instead of the slogan "Natural numbers are made by God, all else by man", we should put it as "Only finite entities are to be made by the activities and the mind of ordinary present human beings.
And from the land of the finite, only a limited part is for the human mind, the human consciousness and human practice." The land of the finite is today certainly happier and richer in solution possibilities. We should remark that the previous example of a practical countable model of the real numbers is absolutely distinct and different from the countable model of the real numbers that the Löwenheim-Skolem theorem predicts. But even this is not adequate as the necessary clarification and enhancement of ancient, medieval or renaissance-age mathematics into 21st-century mathematics. In fact, the closure of the system itself can also be substituted with a concept of successive closure of operations from one resolution-lattice or grid to a finer one. In other words, we may as well define the system of real numbers as a finite set (a model of the real numbers in layer 1, of course with different axioms and algebraic structure!), almost exactly as real numbers are represented in the computer, in any computer programming language, in single, double or higher precision. To develop a finitary interpretation of the infinite, we must define a new concept, that of a Limited Model, or Instance, of an axiomatic theory. The definition of a limited model or instance is as the usual definition of a model of an axiomatic theory in logic, except that all universal quantifications over the set (like "for all natural numbers..." or "for all real numbers...") do not really refer to all the elements of the set, but to a limited subset of it. So "for all natural numbers" may mean: for all natural numbers less than a limit number n, which is used throughout in all logical arguments in the theory.
After the concept of limited model or instance, even large theories like the Zermelo-Fraenkel theory have limited models, or instances, that are finite sets of finite sets! All cardinals and ordinals are interpreted in this way as finite integers, and the order between them as the order of natural numbers! Therefore the axiomatic systems of the natural numbers, Zermelo-Fraenkel set theory, the Cauchy-Dedekind real numbers, the transfinite real numbers, the surreal numbers, the ordinal real numbers etc. have finite limited models, or instances, that consist of rational numbers representable in the operating system and a programming language of a computer. In such systems of finite limited models, or instances, all concepts like finite, countably infinite and uncountably infinite become logical grades of the usual order of the finite natural numbers. The discrimination between the finite and the infinite, or the various grades of the infinite, is simply a discrimination of transcendentally separated (meaning: with a large gap) areas of the finite, and of the finite procedures on it, that may also have different informational and logical determination.
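A toy rendering of the bounded quantifier of a limited model can be written in a few lines. The bound LIMIT and the helper name below are hypothetical, chosen only for illustration: every "for all natural numbers" is read as "for all n below one fixed limit used throughout".

```python
# A toy "limited model": universal quantification over the natural numbers
# is replaced by quantification below one fixed limit number.

LIMIT = 10_000   # hypothetical fixed bound of the limited model

def forall(pred):
    """Bounded universal quantifier of the limited model."""
    return all(pred(n) for n in range(LIMIT))

# "Every natural number is even or odd" -- true in the limited model:
print(forall(lambda n: n % 2 == 0 or n % 2 == 1))   # True

# "Every natural number has a successor inside the model" -- fails at the
# edge, which is exactly the new phenomenon limited models introduce:
print(forall(lambda n: n + 1 < LIMIT))              # False
```

The second check shows why a limited model needs different axioms and algebraic structure, as the text notes for the layer-1 models: closure laws only hold up to the boundary.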

It seems that we forget that the classical axiomatic systems of the real numbers, and of Euclidean geometry as well, are not really realizable with terminating algorithms in a computer, and any literal realization simply produces computer viruses!

Should we start thinking of the infinite as "viruses" in abstract thought too? If the infinite is the inability of the cognitive tools to specify the size of the finite cardinality of the studied entity, or if this finite size is dynamically changing during the cognitive process, why should a knowledge or description of it exist at all? And if there is one, why should we accept it as adequate, or try to found everything on this imperfect cognition under such an unfavorable situation? Propositions about "all the natural numbers" might be meaningful only if this "all natural numbers" is always a finite system, and in addition of a size close to the human reality. To demand "proofs" while this system may change size, even during the process of the proof, is an additional difficulty for the arguments. Why should we accept as "proofs" only such restricted types of logically acrobatic and dangerous processes? Take, for example, a proposition about the natural numbers that is not yet proved. Why should we look for "proofs" that are logical procedures for a finite system of numbers whose exact size we have hidden? Or, in addition, for a system of natural numbers that, although finite and of unknown size, may change size and increase during the logical argument of the proof? Is it an appropriate human spiritual habit to accept the proposition as proved only if we can devise an argument under such restricted and unfavorable circumstances? Would it not be more perfect for cognition, and of more honest human interest, to require a proof of its truth or falsity only if we refer to a fixed, non-changing finite system of natural numbers, of a size that makes sense for the human world, and only if we specify the upper bound of the finite size of the proof according to the symbolic tools?

This is not something that can be overcome in a roundabout way by, say, some "numerical analysis" reduction of calculus etc., as this does not really solve the problem. And it is not simply a gap between two sciences, mathematics and computer science. It is rather a gap between the present phase of the sciences and older phases of the sciences preserved until now. Its resolution means a thorough re-examination of the very foundations of mathematics, and an update to the present state of the art in thinking and manufacturing.

For the requirement of classical computability with terminating algorithms, the real line, and any figure of Euclidean geometry, must be endowed with at least a specific highest resolution. In fact, for most of (elementary) mathematics, all such entities must have at least two resolution layers: one ontological, hidden and finer, and one phenomenological, observable and coarser, and both are finite sets. The equality of the phenomenological layer is only an equivalence relation over the hidden ontological layer, with a partition into equivalence classes, or rounding classes. This requires that all geometric equations (e.g. the Pythagorean theorem) are never equalities in the classical sense, but always an equivalence relation after rounding, up to a resolution not finer than the ultimate ontology of the figure (or image) in the computer's graphical interface (display screen). This is not an "imperfection" or "approximation" of computation, but a new informational property of the ontology of the mathematical entity and of exact relations. Therefore there is the requirement of new axioms for the real numbers and new axioms for Euclidean geometry. It is of course obvious that the traditional algebraic structures of group, ring, field, vector space etc. are no longer appropriate, and new modifications of them are required. Happily, modern universal algebra studies practically all types of algebraic structures. If ancient Euclidean geometry, as in the Hilbert axiomatic system, was a historical phase 1, and Cartesian analytic geometry, together with the rational numbers, was phase 2, then phase 3 requires new axioms of real numbers and Euclidean geometry to account for the fact that we always mean that the mathematical entity (number, figure or function) has a finite resolution. Thus Euclidean geometry and the system of real numbers should be reformulated in a new finitary way, so that each line segment (e.g.
in the Hilbert axiomatic system, or in Cartesian analytic geometry, or in the vector space definition) and the number system always have at least two layers: one ontological, hidden and finer, and one phenomenological, observable and coarser, and both are finite sets. The equality of the phenomenological layer is only an equivalence relation over the hidden ontological layer, with a partition into equivalence classes, or rounding classes. Relative to two such resolutions, the differential calculus has an exact interpretation (where equality is the rounding equivalence relation, defined appropriately so as to be a transitive relation too, e.g. as a partition of the fine resolution into balls around the points of the coarse resolution), and we may talk about a multi-resolution Differential Calculus. Thus the old Newtonian symbolism of the fluxion ox of the number x can very well be interpreted by an appropriate rounding relation at an appropriate finite resolution (or resolutions). The numbers would be at an observable resolution, while the fluxions ox (or the Leibniz infinitesimals dx) would be at the hidden finite resolution. Each layer has its exact operations, and the observable layer induces rounded ones in the hidden layer. And similar simple and transparent interpretations can be made for the Leibniz symbolism dx (where dx is of course a finite rational number, but when geometrically represented it is below the threshold of human visual discrimination), with many advantages over the classical definitions with limits. There was an important reason that the great Newton, and consequently Leibniz too, chose two different types of numbers: the infinitesimals and the finite. In the present approach both are of course rational numbers, but of different status in the overall structure. E.g. the Pythagorean theorem on a triangle is an equality only on the phenomenological layer, and an equivalence relation in the ontological layer.
So the number square root of 2 is unique only on the phenomenological layer of the number system (equality up to rounding), and is many (but finitely many) rational numbers in the hidden ontological layer (with more decimals than are significant for the phenomenological layer). If Newton had not been so isolated during his 20s and 30s in his creative work, if he had met an earlier sympathetic appraisal and accompaniment by sufficiently many others in his creative work, plus most probably a direct support of a personal rather than creative character, he would probably not have suffered as he did in his late 40s and later.
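Returning to the two-layer picture, a small Python sketch can make it concrete. The two grid steps below are my own assumed choices: numbers live on a fine hidden grid, and observable equality means "equal after rounding to the coarse grid". Under that rounding equivalence the Pythagorean relation holds exactly at the observable layer, even though the hidden rationals differ.

```python
# A hedged sketch of the two-resolution layers: equality at the
# phenomenological layer is a rounding equivalence over the hidden layer.

from fractions import Fraction
from math import isqrt

FINE   = Fraction(1, 10**6)   # hidden ontological resolution (assumed)
COARSE = Fraction(1, 10**3)   # observable phenomenological resolution (assumed)

def on_grid(x, step):
    """Round a rational to the nearest point of the grid with this step."""
    return round(x / step) * step

def observably_equal(x, y):
    """Phenomenological equality: same point of the coarse grid."""
    return on_grid(x, COARSE) == on_grid(y, COARSE)

# Hidden-layer hypotenuse of the (1, 1) right triangle: sqrt(2) on the fine grid.
c = Fraction(isqrt(2 * 10**12), 10**6)     # floor(sqrt(2) * 1e6) / 1e6

print(c * c == 2)                          # False: not an equality in the hidden layer
print(observably_equal(c * c, Fraction(2)))  # True: an equality up to rounding
```

So "c squared equals 2" is false as a classical equation on the hidden layer, yet exactly true as the rounding equivalence of the observable layer, which is the multi-resolution reading of the Pythagorean theorem sketched above.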

Thus the calculus with limits could be a simplistic design, a prelude to a really enhanced calculus without limits, in finite resolutions. If the system of numbers is a finite resolution algebra, then even the epsilon-delta technique of Weierstrass in calculus no longer has an interpretation as limits, but as equations up to rounding. The claim of the usual Weierstrass calculus with limits, and of the usual Cauchy-Dedekind real number system, to an ontology and properties at all finite resolutions goes, in practice and in consciousness, beyond the limits of the place of human practice and consciousness. As human beings we are interested in, and can control by our practice, only a limited range of finite resolutions. Therefore we should make our theories for a fixed, though maybe variable within some limits, finite resolution. Continuity and differentiability in the usual calculus with limits give continuity and differentiability up to a pair of resolutions (roughly speaking, corresponding to the epsilon-delta choices). But not conversely: a function continuous or differentiable up to a pair of resolutions is not necessarily continuous or differentiable at any other resolution!
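One possible finite rendering of "continuity up to a pair of resolutions" is the following sketch. The definition coded here is my own assumption, not the text's formal one: on a finite delta-grid, neighbouring inputs must give epsilon-close outputs, and the whole check is a finite computation.

```python
# A toy check of continuity up to a pair of resolutions (delta, eps):
# on the finite delta-grid over [lo, hi], neighbouring grid points must
# give values that differ by at most eps.

from fractions import Fraction

def continuous_up_to(f, lo, hi, delta, eps):
    """Finite check replacing the epsilon-delta limit definition."""
    x = lo
    while x + delta <= hi:
        if abs(f(x + delta) - f(x)) > eps:
            return False
        x += delta
    return True

delta = Fraction(1, 100)   # input resolution (assumed)
eps   = Fraction(1, 10)    # output resolution (assumed)

square = lambda x: x * x
step   = lambda x: Fraction(0) if x < 1 else Fraction(1)

print(continuous_up_to(square, Fraction(0), Fraction(2), delta, eps))  # True
print(continuous_up_to(step,   Fraction(0), Fraction(2), delta, eps))  # False
```

Note that, as the paragraph warns, a "True" verdict holds only for this pair of resolutions; refining eps while keeping delta could turn it into "False".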

I believe that we must have a correct sense of the historical necessities in the evolution of mathematics. A geometry created to be used by drawing on the sand or on papyrus, at a time when physics did not know the atomic structure of matter, should be like Euclidean geometry; but if it is to be realized with modern multimedia and computer techniques of image processing, it must be a different axiomatic system. A calculus created for the calculations of astronomy or physics, at a time when the sciences had not realized the atomic structure of matter, and which would be carried out by hand and pen on paper, should be as Newton and Leibniz suggested it, or as it was changed and developed into the usual system of real numbers in the 18th century: artistic as it is, and of low symbolic computational complexity. But a calculus or functional analysis to be computed with modern computers, in an age when the atomic structure of matter is realized in the sciences, should have a different axiomatic system for the real numbers, one that also admits finite models of them, and different definitions and concepts for functions, figures, manifolds, random variables etc. In short, older quantitative mathematics, intended for calculation on paper and for a time when chemistry and physics did not know the atomic structure of nature, cannot be the same as modern quantitative mathematics, intended for computations in computers and for a time when the sciences have realized the atomic structure of matter. The former was only a prelude to the latter.

In the construction of the usual system of the field of real numbers, axioms like the closure of the operations and the continuity axioms of Dedekind, Cauchy or Cantor simply reflect the early, obscured stage of the theory of the phases of matter in chemistry and the natural sciences, which was still bisecting physical substances without having yet found any first bottom, or realized their atomic structure. No matter how brilliant and sophisticated it might have been for its time, it is like centuries-old software that has to be updated with the addition that the real numbers are rational, have a finite bounded resolution, and admit finite models.


In terms of the 7 layers, the above remarks mean that we may reformulate almost all our fundamental classical theories in mathematics, like geometry, numbers, analysis, set theory etc., so that they are modeled directly in layer 1, where all sets are finite! This has tremendous control advantages (ontological, logical and computational). E.g. all functions are of finite information and can be considered vectors of finite-dimensional spaces; all differential geometry is of finite information; and all of functional analysis is of finite information and in finite-dimensional spaces! We must admit that the multimedia techniques in computer science that create the sense-continuums of image, sound etc. have already suggested how, from finite sets only, we can get the behavior of the classical continuous entities, in a logically different way from the classical mathematical definitions that require them to be "infinite"! This reformulation of basic mathematics is, I believe, necessary, and puts the human cultural concepts back into the true landscape of the human mind, which is meant to be linked with the works of practical activities. The revealing missing information for this is the resolution specification of the entity (number, figure, manifold, function, random variable, or even set, etc.), which can vary, but can always be only finite. That this enhancement and simplification is a great advancement can be realized from the fact that it reflects the way in which physical reality exists and evolves in its atomic particle structure. It is natural that the age of the knowledge of the atomic structure of matter requires a similar finitary atomic structure in the mathematical ontology too.
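The claim that at layer 1 every function is a vector of a finite-dimensional space can be shown in a few lines. The grid size and names below are illustrative assumptions: a "function" on a finite grid is literally a finite vector of rationals, and differentiation becomes a finite difference, another finite vector.

```python
# Layer-1 sketch: a function on a finite grid is a finite vector; its
# derivative is the finite-difference vector, with no limits involved.

from fractions import Fraction

N  = 100
dx = Fraction(1, N)
xs = [i * dx for i in range(N + 1)]   # finite domain grid
f  = [x * x for x in xs]              # the "function" x^2 as a finite vector

# Finite-difference derivative: yet another finite vector.
df = [(f[i + 1] - f[i]) / dx for i in range(N)]

print(float(df[50]))   # prints 1.01: close to the classical derivative 1 at x = 0.5
```

The finite-difference value 101/100 differs from the classical derivative 1 by exactly dx, i.e. by less than the coarse observable resolution, which is the multi-resolution reading of "the derivative of x squared at 1/2 is 1".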

Thus not only the differential equations of the calculus may take a new meaning, not through limits but as rounding equivalence relations up to a resolution, but also the elementary algebraic equations of geometry. This applies even to the Pythagorean theorem on a right triangle. Such equations are no longer "exact" equations with no resolution specification (a visual-phenomenological exactness based on the phenomenology of the senses rather than on logical ontology), but are always exact as rounding equivalence relations up to a resolution. Thus the number pi, as the quotient of the circle's length by its diameter, is no longer unique, but depends on the resolution that defines the circle and the diameter, and is always a rational number! There is no such thing as one "circle" that we "approximate" with rational numbers or many polygons; there are rather many finite-point circles, always up to a resolution, which are absolutely exact, and this is all that there is! Maybe we must escape from the phenomenological and visual monarchy in the mind to the creational-practical ontological democracy of the mind, which is in harmony with the work of practical activities.
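A resolution-dependent, always-rational "pi" can be illustrated directly. The construction below is my own toy choice (area counting on a finite grid, not the text's length-to-diameter quotient): at each resolution, "pi" is the exact rational ratio of the number of grid points of the finite-point disk to the squared radius, and different resolutions give different rational values.

```python
# A toy "pi at resolution 1/n": count the points of the finite-point disk
# of radius n on the integer grid and divide by n^2. Exact and rational,
# and different at every resolution.

from fractions import Fraction
from math import isqrt

def pi_at_resolution(n):
    """Grid points (i, j) with i^2 + j^2 <= n^2, divided by n^2."""
    inside = sum(2 * isqrt(n * n - i * i) + 1 for i in range(-n, n + 1))
    return Fraction(inside, n * n)

for n in (10, 100, 1000):
    print(n, pi_at_resolution(n), float(pi_at_resolution(n)))
```

Each value is an absolutely exact rational number for its own finite-point circle; no single "true pi" is being approximated, which is precisely the reading the paragraph proposes.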

Classical mathematical ontology does have the property of atomic structure (lines, e.g., are sets of points), but in an artistic and visual style rather than a practical, realistic one, which requires the indivisibles to be finitely many too. The abstraction of infinity, as defined during the 18th and 19th centuries and also by G. Cantor, is also an artistic, phenomenological abstraction, therefore of an early phase of the creation of the continuum and of mathematical ontology; it has an expiration date, and must be replaced with the next phase in the creation, which is the present and future practical and creational abstractions, based on the finite and the concept of finite resolution.

In the 20th century, the technology and art of cinema showed for almost a century to the collective mind how the continuum can be created by an invisible finite (the finite number of pictures per time unit, sufficiently many as to create the sense-effect of the continuum). Then computer multimedia extended and refined it to the high degree of sophistication and perfection of the present new millennium. At the same time, in the science of physics, the concept of the atomic structure of all continuous matter became a widespread and well-explained concept. These developments in the sciences, technology and the arts show how the early concept of the infinite, as an accounting of the invisible micro-structure, say, of a continuous geometric line, that was mainly created during the 18th and 19th centuries, can be substituted and updated with a true invisible finite structure. The gain from this switch from the infinite to the finite, in mathematics and the sciences, can be tremendous.


Maybe in the past centuries, when the classical theories were created and when computers were not yet a widespread practice, this true land of the human mind, mentioned above, which is based on the finite, was inaccessible to the tame spirit and was obscured in intractable wilderness. Most of the thinkers who were sensitive enough would refuse to create in the land of the finite, as it would usually harm their consciousness as creators. And those monopolizing it in the past were very often using it with an undesirable and injuring impact on the consciousness of their audience. Therefore the infinite, as more artistic and phenomenological, was felt to be better! There is probably a good reason for this situation as an early phase in collective thinking. At the beginning, mathematical thinking is introverted rather than directed outwards to visible reality. This creates an unconscious or subconscious realization of the functions of the infinite at the very subjective, or even bodily, functions of the thinker. Thus, unconsciously or subconsciously, the object of his thinking, while thinking of the infinite, is almost himself! Therefore he unconsciously puts barriers to his own intellect, such as the concept of the infinite, where the ontological status of the studied object may even dynamically change while an argument about it is carried out! The very specification of a magnitude as of limited size, or even as changing by a law, would automatically create a measurement intervention in the thinker's state, which might reduce his freedom and clear power of reasoning. The reader might be familiar with how the state of a physical system in the microphysical world (quantum mechanics) is influenced by a single measurement. E.g. the measurement of the position in a quantum oscillator automatically, practically, stops the oscillator. The analogy to the subjective consciousness of the thinker in the place of the physical system is clear.
Thus the abstraction of the infinite acts not only as an early imperfection but also as a mature protective encryption of the creative mind. This explains why the "infinite" was "felt better" and was preferable to the early thinkers. This was, as far as I know, a most important factor for many creators, who preferred to think and create in the land of the infinite. But today, almost half a century after the spread of computers, the "magical background" of all this situation has changed. It seems natural to proceed and re-found mathematics in the finite, the true land of the human mind, as a kind and sensitive spirit may have a home there too! It is not a restriction but rather an advantage for the cognitive process. Although the infinite, as a way of speaking about entities and changes, has in the past added psychologically to its creators, its natural evolution is, I think, to elaborate and transform into a sophisticated analysis of the ranks of the finite, and thus contribute to a better link of thinking, feeling and acting (applications).

In a historical perspective of the evolution of mathematics, this means to proceed 1) from the ontological specifications of the ancient Chinese, Egyptian and Greek mathematics directly to 2) the introduction of calculus in the 17th century, and then, skipping the infinity tricks with series of the 18th century, the definitions of the infinite real numbers of the 19th century, and also the infinite sets of the 20th century, to proceed directly to 3) the 21st-century finitary techniques for all ontology and the continuum, by computer science. This has nothing to do with becoming "mechanical"; on the contrary, it may mean becoming elegant, simple and efficient, but also honestly human, practical and detailed, whether implicit or explicit.

The next papers were created when the previous philosophical ideas and choice of techniques had not yet been integrated in my mind. Fortunately the next papers, as well as many from the mathematics of Layers 6 and 7, admit an interpretation or "unlocking" in layers 5 and 4 (as concurrency complexity in representing parallel computations with rational numbers in the computer). In addition they may be a preliminary phase of the development of a multi-resolution Differential Calculus over finite systems of rational numbers, which is mainly a finitary creation. How this may be so must be a future creative task. There is no doubt, nevertheless, that there are advantages in, and a need for, scale-sensitive description of natural and social phenomena.

To develop a finitary interpretation of the infinite, we must define a new concept, that of the Limited Model, or Instance. The definition of a limited model, or instance, is as the usual definition of a model in Logic, except that in all quantifications over a set (like "for all natural numbers...", or "for all real numbers...") it does not really refer to all the elements of the set, but to a limited subset of it. So "for all natural numbers" may mean for all natural numbers less than a limit number n, which is used throughout in all logical arguments in the theory. With the concept of the limited model or instance, even large theories like the Zermelo-Fraenkel theory have limited models, or instances, that are finite sets of finite sets! All cardinals and ordinals are interpreted in this way as finite integers, and the order between them as the order of natural numbers! Therefore the axiomatic models of the natural numbers, Zermelo-Fraenkel set theory, Cauchy-Dedekind real numbers, transfinite real numbers, surreal numbers, ordinal real numbers etc. have finite, limited models or instances that consist of rational numbers, representable in the operating system and a programming language of a computer.
In such systems of finite limited models, or instances, all concepts like finite, countably infinite and uncountably infinite become logical grades of the usual order of the finite natural numbers. The discrimination between finite and infinite, or among various grades of the infinite, is simply a phenomenological discrimination of transcendentally separated (meaning with a large gap) areas of the finite, which may also have different informational and logical determination relative to the resources of the cognitive system. For computer multimedia applications, the parameters of the limited model or instance, say of the real and ordinal real numbers, may be chosen so that the infinitesimals are pixels that fall below the visual discrimination threshold, the time-infinitesimals are of a finite size lower than the time-interval required to produce the visual effect of motion, etc. For applications in physics, the parameters that define the discrimination of finite, infinitesimal, infinite etc. come from the structure and function of physical reality itself, which falls into layers of discretised material units, like planets and stars (layer 1), protons, neutrons and electrons (layer 0), and even finer, yet undiscovered, permanent particles (should we call them aetherons?) that make up the known classical fields like the electromagnetic, gravitational etc. (layer -1). Most probably the actual sizes of the pixels of the layers of physical reality (relative sizes of aetherons to protons and to stars etc.) follow a geometric progression (e.g. with a ratio equal to that of the size of a proton to the size of an average star!).
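The geometric-progression hypothesis for the layer sizes can be turned into a back-of-the-envelope computation. The numeric sizes below (a proton of order 1e-15 m, an average star of order 1e9 m in diameter) are rough order-of-magnitude assumptions of mine, used only to illustrate the arithmetic; the "aetheron" size is then the hypothetical next term of the progression downwards.

```python
# Illustrative arithmetic for the geometric progression of layer sizes.
# The input sizes are order-of-magnitude assumptions, not values from
# the text; "aetheron" is the text's hypothetical layer -1 particle.
import math

proton_m = 1e-15  # layer 0 characteristic size (order of magnitude)
star_m = 1e9      # layer 1 characteristic size (order of magnitude)

ratio = star_m / proton_m      # common ratio of the progression
aetheron_m = proton_m / ratio  # hypothetical layer -1 size

print(f"ratio ~ 10^{round(math.log10(ratio))}")          # ~10^24
print(f"aetheron ~ 10^{round(math.log10(aetheron_m))} m")  # ~10^-39 m
```

Under these assumed inputs, each layer's "pixel" is smaller than the previous layer's by the same enormous factor, which is exactly the "transcendental separation with a large gap" described above.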

Even the mathematical axiomatic systems have to be updated in the evolution of civilization, as new requirements appear in the societies.
