Computer Science
Both of these theses have distinguished supporters. The second was particularly asserted in France by Professor Jacques Arsac in his book La Science informatique (Dunod, Paris, 1970). If we adhere to the empiricism of the first thesis, we can note that the automatic processing of information, especially by computer, involves a large number of scientific disciplines in a field of application very close to organization (scientific, industrial, administrative).
Thus, it can be said that computing is an interdisciplinary field, where current computers, intellectual structures (scientific calculation algorithms), and institutional structures (accounting organization, industrial organization) practically determine its content.
The existence of computers has indirectly drawn attention to electronic circuits and renewed interest in Boolean algebra, for which the computer serves as an excellent application. For similar reasons, the theory of automata and the theory of languages (both natural and artificial) draw new value from the existence and functioning of the computer.
The machine itself, by its computational power, enables the use of certain mathematical tools that were known but previously unusable. Thus, developments in linear algebra or statistics have significantly expanded the field of applied mathematics. Through the creation of new, more powerful and often more economical calculation algorithms than traditional ones, a new conception and methodology of numerical computation have emerged. The machine can also guide mathematicians in their research by providing heuristic approaches that either give meaning to or disprove certain conjectures.
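To make this concrete, here is a minimal sketch, in Python and with invented values, of the kind of classical numerical method the text alludes to: Jacobi iteration for solving a linear system, a procedure long known but tedious by hand and routine only once a machine executes it.

```python
# Illustrative sketch only: Jacobi iteration for A x = b, with made-up data.
def jacobi(A, b, iterations=50):
    n = len(b)
    x = [0.0] * n                               # initial guess
    for _ in range(iterations):
        x_new = []
        for i in range(n):
            # sum of the off-diagonal terms using the current estimate
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            x_new.append((b[i] - s) / A[i][i])
        x = x_new
    return x

# A small, diagonally dominant system (hypothetical values)
A = [[4.0, 1.0, 0.0],
     [1.0, 5.0, 2.0],
     [0.0, 2.0, 6.0]]
b = [9.0, 20.0, 22.0]
print(jacobi(A, b))   # the iterates converge toward the exact solution
```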
Furthermore, the machine, through coding, can process not only numbers but also letters, punctuation marks, and therefore texts. The processing of linguistic information thus extends computing into the humanities. Documentary or educational applications in particular, not to mention attempts at automatic language translation, involve fields such as physiology, psychology, and social psychology in their experimentation. Simulation processes, operational research, and game theory, among others, allow computers to practically intervene in all areas of human activity.
However, this prodigious rise of the "intellectual" machine should not overshadow the relative poverty of computing as a science. Norbert Wiener's cybernetics and Claude E. Shannon's information theory do not seem to exhaust the possible generalizations of a science of information; rather, they account at the highest level for what computers are and do. It is likely that, considering the complexities of operating systems (real-time and time-sharing, computer networks), the diversification of programming work, and the sophistication of certain applications (psychology, education, etc.), theoretical analyses will lead to the induction of laws or, later, the deduction of theorems specific to computing, thus transitioning it from a virtual science to a real one.
The advent of computing has revolutionized the world of technology. It has become indispensable for modern development, establishing itself as a key discipline for all fields of political, economic, and daily human activities.
It should be noted that, as a powerful tool, computing continues to evolve rapidly, taking an unprecedented place in everyday life.
1 - Chronology and Evolution of Computing Systems
A well-established usage associates the word "informatique" with all the information processing operations carried out using sophisticated electronic machines designed to help humans overcome the classical opposition between competence and performance in their activities. The first machines were indeed intended to compensate for human deficiencies in terms of performance in calculation: for example, solving complex differential equations for meteorology, calculating parameters for the creation of the first atomic bomb, or conducting the U.S. census by handling enormous quantities of data in a relatively short time. These were things people could imagine naturally (competence) but couldn't really accomplish (performance). These machines, which were initially just electromechanical and later electronic calculators (computers), were then repurposed to enhance the performance of the human-machine pair in the broader field of non-numerical data. As early as the beginning of the 1950s, information processing gained recognition, and the calculator became a "data processing machine" (which would soon be called "ordinateur" in France), without any restrictions on the nature of the data being manipulated.
1-1 / Computing Conquers the Business World
Barely out of the research laboratories and immediately installed in major universities, mostly American and British (computing developed in its early years largely without France), the electronic calculator was quickly directed towards large administrations. This means that very early on (as early as 1951, with Univac), its field of application split into two major areas: scientific on the one hand, administrative and commercial on the other. The machines were enormous, consumed phenomenal amounts of energy, and required specially equipped rooms (air conditioning was an essential environmental requirement). By 1954, they began to be deployed in large public and private enterprises.
Initially, due to the dominance of "hardware," inherited from mechanography, and because the binary system and its derivatives were the only external codification systems used, the relationship between humans and machines was complex and required specialists. The structure of businesses was immediately modified; a powerful center emerged, the IT department, which quickly grew in importance and became a new center of power. The necessary centralization of resources (computing power and storage capacities) and the concentration of key information about the functioning and strategy of the company gave the managers responsible for implementing this cutting-edge technology a predominant role.
At the same time, new specialized professions appeared, corresponding to the different stages of data processing, from the final user to the machine and back. These technicians acted as intermediaries due to the language incompatibility between humans and machines. This chain of work included:
- The analyst, who translated the needs of the final user into a logical scheme (flowchart) describing the succession of operations needed to achieve a usable result;
- The programmer (or developer), who translated the flowchart into a program written according to the rules of a more or less advanced language (Cobol, Pascal, C, etc.);
- The data entry specialist, who transferred the information (programs and data) onto an appropriate physical medium for electronic processing;
- The preparer, responsible for setting up and checking all the material and logical means necessary for processing (input, output, control, and sequencing data);
- The operator, who monitored the computer and its peripherals during operation.
They were joined by specialists responsible for maintaining the equipment.
This organization revolved around the technological core made up of the central processing unit and the associated peripherals.
The central unit had fairly common characteristics from one manufacturer to another (companies copied each other more or less, while maintaining a level of incompatibility to protect their investments and keep their clients captive). However, it evolved rapidly, becoming more compact and with faster circuits. The storage capacity of its memory systems increased dramatically.
On the side of peripheral devices, specialized equipment was multiplied to diversify the means of exchanging data between the external world and the central unit. What is now called the human-machine interface had been neglected in the beginning. Initially, pre-wired connection boards were used, then a connected typewriter. The computer responded by lighting lamps on the control panel or activating the typewriter. This setup corresponded quite well to the idea of cybernetics that was in vogue at the time, but the process remained impractical, and one had to master the binary numbering system well.
1-2 / The 1960s-1970s: Manufacturers Dominate
By the early 1960s, machines specialized in "input-output" operations were available. They could read and write on punched cards (a legacy of mechanography), punched paper tapes (a legacy of telegraphy), magnetic tapes (a legacy of early tape recorders), or disks (a legacy of gramophones). Printers, though noisy, bulky, and slow, allowed the production of printed reports that were somewhat legible. In terms of system control, the first video screens appeared to display information clearly (or almost clearly) instead of the more complicated oscilloscope outputs.
Incidentally, as surprising as it may seem, computing, at least in its early days, is not an area of innovation: it borrows its vacuum tubes from radio, its transistors from the space industry, its screens from television, and its peripherals from various other disciplines... It is an industry that does not really create the tools of its own development, but rather diverts objects produced by the innovative efforts of other industrial sectors. Computing first established itself as a typically liberal industry, characterized by the exploitation of needs that are more stimulated than real, an explosion of models and families of machines justified solely by market-occupation strategies, and research that is primarily applied, fitting into a logic that prioritizes short-term profit. In short, it assembles and exploits rather than innovates. To illustrate this, recall that the first attempt at commercializing computers was a failure (Mauchly, Univac-I, in 1951), because technological prowess was prioritized over market strategy. I.B.M., which entered the race late, fully grasped the commercial dimension of the venture. Until the 360 series in the 1960s, and aside from the 1401 and its derivatives, the firm's successes were more commercial than technological.
However, technology then surpasses most users. Despite the development of high-level languages, such as Fortran or Cobol, created by committees composed of user and manufacturer representatives, the knowledge of key elements of information processing eludes many users. Furthermore, machines require quite regular maintenance, which allows manufacturers to profit significantly from a market that is both captive (for technical reasons) and rapidly expanding. As a result, oversized, unsuitable, or unreliable equipment is often installed at customer sites. For instance, we recall the underperformance of the Gamma-60 from Bull or the equipment left unused in the basements of a large French insurance group in the 1960s. Not to mention the total incompatibility of machines, not only from one manufacturer to another but often even among machines from the same company. This is exemplified by I.B.M.'s 1400 series, which necessitated the development of special simulation and/or emulation systems for other I.B.M. products, such as the 707x, 709x, 708x series, etc.
Such was, in broad strokes, the situation of computing in Europe and the United States until the conjunction of two crucial events around the 1980s: the advent of the microcomputer and the end of the era of so-called third-generation computers.
1-3 / The Advent of Microcomputing
The first professional microcomputer was French. In 1975, engineer Truong Trong Thi developed, within his company R2E, the Micral N, exhibiting all the features of future personal computers of the 1980s: microprocessor, universal keyboard, cathode-ray screen, operating system. Due to a lack of financial resources, and despite a brief lineage with the Micral series from Bull, this product did not withstand the upcoming wave of I.B.M. personal computers (PCs). In fact, the revolution is of a different nature. Apple made a splash in 1978 by launching the first family personal computer onto the consumer market, marketed like a vacuum cleaner, a television set, or a tape recorder.
The object is brilliant but aimed primarily at recreational uses, which immediately sparks enthusiasm among the general public. Fierce competition arises... Many companies (Compaq, Amstrad, Atari, etc.) market machines of various purposes that all look alike and are mutually incompatible. Despite I.B.M.'s displayed disdain, a team led by Don Estridge studies the problem. In 1981, I.B.M. launches its first PC, aimed primarily at the liberal professions for individual management tasks. In contrast to Apple and the other manufacturers, which indulge in whimsy (multiplying drawings, icons, sprites, and other visual flourishes), it is rather austere, and its screen is dreary. It has, however, a real operating system and an architecture that, though perhaps poorly designed, is "open," allowing the addition of many peripherals and the use of numerous software programs.
From its release, the PC enters the business world. It interests executives greatly due to the possibilities it offers in two favored areas: word processing and spreadsheet-type applications (calculations on data organized in numerical tables). This unexpected event surprises everyone: I.B.M. itself, which must implement a new strategy, as well as the IT directors of companies who see PCs sprouting up everywhere in offices, like mushrooms, in a totally anarchic manner. I.B.M. reacts by adapting its microcomputer to this new market. The PC abandons the cassette and is enhanced with diskette drives with an initial capacity of 160 kilobytes, later increasing to 360. A hard disk (Winchester technology) emerges, with a storage capacity exceeding ten million bytes. The main memory (RAM, random access memory) is increased to 640 kilobytes, with extension possibilities. By 1987, total worldwide sales reach ten million machines. This success, linked to I.B.M.'s notoriety and power, gives the PC a de facto standard status. And the era of clones begins, in an atmosphere of fierce competition marked by the predominance and aggressiveness of Asian manufacturers.
1-4 / The Birth of Networks
Meanwhile, the "big" computing continued its own evolution, leading to the culmination of architectures and structures (machines and organizations) characteristic of the "third generation," i.e., highly centralized, in "large site" environments, with increased storage capacities and relatively heavy management procedures. The types of applications multiply, as do the access requirements to computing resources. Mass memory units proliferate. Tons of paper are produced. The centers are suffocating, as the performance of the raw information input and processed information output does not keep pace with the performance gains of central units.
From the mid-1970s, new access methods are developed, allowing users to communicate directly and remotely with the central computer. New devices take their place in offices, workshops, and behind counters: terminals. Some merely serve as intermediaries for input operations (via the keyboard) and output (via the screen); these are termed "dumb terminals." However, with the development of microprocessors, "intelligent" terminals emerge; they offer some local processing capabilities and herald the imminent decentralization of processing power, which is now the cornerstone of computing structures.
The integration of terminals is one component of what is commonly referred to as the "fourth generation" of computing systems. Naturally, managing these terminals requires the development of new software and new languages, as well as the establishment of a new organization. This results in greater autonomy for users, for whom the concept of "virtual machine" is invented: at any given time, a portion of the resources of the central computer is reserved for the user, who, from their terminal, "sees" the equivalent of a machine that would be exclusively assigned to them. The consequences of this new approach are manifold. First, there is a significant increase in traffic; secondly, application types multiply; and finally, everyone can access the central unit.
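As a purely hypothetical sketch of the time-sharing idea described above (the names and numbers are invented), the following Python loop grants each connected user a short slice of work in turn, so that each terminal "sees" what looks like a machine of its own.

```python
# Illustrative round-robin time-sharing: each user gets a slice in turn.
from collections import deque

def run_time_sharing(jobs, slice_units=2):
    """jobs: mapping user -> units of work remaining (hypothetical values)."""
    queue = deque(jobs.items())
    while queue:
        user, remaining = queue.popleft()
        work = min(slice_units, remaining)
        print(f"{user}: executing {work} unit(s) of work")
        remaining -= work
        if remaining > 0:
            queue.append((user, remaining))   # back to the end of the queue

run_time_sharing({"terminal A": 5, "terminal B": 3, "terminal C": 4})
```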
We are already faced with a network, whose topology is primarily star-shaped. It becomes urgent to impose order, materialized by increasingly strict procedures for regulation and control. Not just anyone can access any information at any time. The system, which for a time seemed to have been democratized by the multiplication of means of access to information, generates its own constraints. Restricted access, identifiers, passwords, protected zones, etc., become the obsessions of a new actor, the system administrator, invested with a new power (controlling, for reasons of efficiency, each user's activities) and tasked with a new responsibility: monitoring the integrity of the company's data and procedures. For with the new access methods, and despite security systems, curious, cunning, and prying individuals begin to infiltrate company data files.

It is the era when I.B.M. launches its centralized network architecture concept, SNA (Systems Network Architecture), in which the general scheme of large networks takes shape: a powerful central computer (host) communicating with a myriad of terminals through a "front end" and a number of specialized computers, the communication controllers. The front end and the controllers are tasked with relieving the central system of increasingly complex communication management; in return, the central unit can dedicate itself exclusively to its intended purpose: calculation, data storage, reporting. The communication controllers play a decisive role in the evolution of systems: they induce a reorganization of the company's information circuits into domains that are more or less autonomous but increasingly homogeneous in function.

I.B.M.'s attempt is not isolated. By the end of the 1970s, other major manufacturers embark on the adventure, each offering its own network architecture: Bull with DSA (Distributed System Architecture), Digital with Decnet, etc. Gradually, a general architecture model emerges, defined doggedly by an international body, the I.S.O. (International Standards Organization), despite the obstruction of I.B.M., whose SNA product would long retain a structure different from the I.S.O.'s: the famous seven-layer communication model. Each "layer" of software describes a step in the processing of the constituent elements of communication, from the physical endpoint components, physical links, logical links, and means of transporting information up to the client application. This marks the beginning of the great era of networks.
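To give a concrete image of this layering, here is a minimal, purely illustrative sketch in Python (only the seven layer names come from the I.S.O. model; everything else is invented): each layer wraps the message handed down by the layer above on the sending side, and the envelopes are stripped in reverse order on the receiving side.

```python
# Illustrative sketch: each layer wraps the data passed down by the one above,
# so the message on the "wire" carries one envelope per layer.
LAYERS = [
    "application", "presentation", "session",
    "transport", "network", "data link", "physical",
]

def send(message):
    """Wrap the message in one envelope per layer, top to bottom."""
    for layer in LAYERS:
        message = f"[{layer}]{message}[/{layer}]"
    return message

def receive(frame):
    """Strip the envelopes in reverse order, bottom to top."""
    for layer in reversed(LAYERS):
        frame = frame.removeprefix(f"[{layer}]").removesuffix(f"[/{layer}]")
    return frame

wire = send("hello")
print(wire)            # shows the nested envelopes
print(receive(wire))   # -> "hello"
```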
1-5 / INTERNET
The Internet is a network ("net") of networks scattered around the world, some of whose services are freely accessible. The Internet also represents a community of users who communicate or exchange emails.
The Internet originated from the ARPANET (Advanced Research Projects Agency Network), created in 1968 by the U.S. Department of Defense to connect its research centers. In 1979, students from Duke University in Durham, North Carolina, had the idea to connect computers to exchange scientific information. From a military phenomenon, then academic, the Internet became, in the United States, the domain of large private companies, small and medium enterprises, and then individuals.
A Global Reach
In 1983, Europe and the rest of the world began connecting to this network of networks, which connected over 2 million computers and more than 30 million users in 146 countries by 1995. By 1993, the Internet had over 45,000 networks and was expanding at a rate of 1,000 new networks per month! However, it took a quarter of a century for the Internet to come to the forefront: the period of 1994-1995 was undoubtedly marked by its "explosion." In a very short time, new services, products, and especially new providers flooded the network. It is worth noting, however, that France still occupies a very modest position in terms of connection to the network.
2 - Possibilities of Computing
2-1 / COMPUTING AND THE SCIENCES
While the significance of the upheavals caused by computing in an increasing number of areas of social life is now recognized, the attention given to the changes it introduces in our understanding of humanity and society (the human sciences) has long remained confined to a small circle of researchers. However, the movement began as early as the 1960s. It not only touches almost all disciplines, modifying their instrumental and economic infrastructure, but also affects methods and even theoretical concepts. For the human sciences, computing is fundamentally the realization of formal thinking.
Computing and the Human Sciences: A Moment of Research
The growing importance of computing in the development of the human sciences is primarily due to functional reasons. The ability of computing devices to capture and process information on a large scale meets a double demand. The first is that of the scientific project itself, to better understand its subject; the prospect of accumulating an almost limitless amount of information can certainly give the illusion of addressing this concern. The second relates to the new working conditions in the human sciences since the late 1950s, characterized, at least for certain disciplines (economics, geography, sociology, law), by a demand coming from administrations and businesses and, in terms of research organization, by the emergence and rapid development of powerful teams gathering dozens of researchers, equipped with means unparalleled compared to those available to the traditional scholar. This organization is similar to the one the natural sciences have gradually adopted over the past century.
In this context, the collective project becomes the rule (evident even in disciplines seemingly devoted to individual scholarship, such as archaeology and linguistics), which favors the accumulation and pooling of information. Moreover, the existence of the demand mentioned above undoubtedly encourages researchers to complement the usual discourse with a representation of their subjects that offers, or seems to offer, guarantees of effective understanding. This is how most databases came to be created.
Finally, the role of computing in such a profound change in working conditions is inseparable from ideological factors (for example, a particular view of modernity) or certain psychosocial traits that deeply characterize the research world: prestige, intellectual competition, control of information as a factor of power.
But while the use of computing cannot be fully understood outside a certain context, an explanation reduced solely to this external functional pressure would also be quite insufficient. It is easy to verify that the research most richly funded by inter-ministerial bodies, in such "sensitive" areas as social behavior, spatial planning, and training and communication systems, has often resulted in perfectly traditional discourse as regards form and methodological elaboration, while disciplines less engaged with the present, and less financially supported (history, archaeology, art history, linguistics, literary studies), have sometimes shown a remarkable assimilation of computing techniques and of their formal methodological substratum.
Indeed, it is starting to be perceived that the essential issue is of a scientific nature and is linked to the notion of formalization, a notion with which the humanities do not necessarily have less affinity than social sciences. The significant role of computing in the development of the human sciences primarily lies in its association of instrumental dimensions with a theoretical-methodological framework that suggests formal requirements capable of changing, over time, the nature and scope of knowledge in several disciplines.
Computing and Formal Approach in the Human Sciences
Here, computing is understood not only as an unprecedentedly powerful tool for "information processing" but also as a set of formal constructions (languages, codes, algorithms, etc.) that realize methods of analyzing and representing phenomena, from the level of empirical observation to the formulation of formal theories. These methods are of a semiological and linguistic nature, as well as mathematical and logical. It should be noted that for the human sciences, where the problem of the systematic description of phenomena and documents remains without a generally satisfactory solution (unlike the natural sciences), the semiological and linguistic methods associated with computing are certainly complementary to mathematical methods.
The nature of computing's intervention can indeed be situated along a single axis: the level of abstraction of the representations being processed, ordered according to a certain "distance" from the phenomena represented. Two reference poles can thus be opposed. The first is that of description, as a regulated paraphrase of observation and a coded transcription of available knowledge (the constitution of data); here the computing embodiment, in the broad sense given to it, takes the form of representation languages, "codes," and the like. The second is that of data structuring, with the aim of "modeling" the phenomena, that is, bringing out possible regularities that order the phenomena (or rather their description) and, more broadly, searching for new logical articulations that allow a more "powerful" explanatory grasp of broader classes of phenomena. In this case, the instruments are generally mathematical in nature.
The formal constructions obtained through these means can have different logical statuses (heuristic configurations, conjectural propositions, "theorems," etc.) that consequently pose various types of interpretation problems (the return of forms to meaning) and validation (empirical testing). It is through this channel—the definition of the underlying logic and the relationship between the theoretical and the empirical—that the use of computing concretely induces in human sciences research a confrontation between different conceptions of science: on one hand, those arising from the empirico-logical tradition of experimental sciences and formal sciences that underpin computing; on the other hand, those rooted in hermeneutic and dialectical traditions, conceptions that fundamentally support a large part of the human sciences, especially in Europe.
This epistemological confrontation constitutes the reference framework through which the meaning of the ongoing evolution should be clarified; nevertheless, the human sciences have so far resorted to computing mainly for utilitarian purposes, essentially to store and manage information in the form of databases, documentary systems, and the like. This recourse has sometimes been aided by sophisticated software (automatic extraction of informative content from texts, recognition of perceptible forms, real-time question-and-answer systems, etc.), sometimes controlling technologically advanced instruments (devices for acquiring and restoring written, graphic, or even vocal information, fast satellite memories, etc.). While one should not confuse the technological complexity of the means with the theoretical status of the results, neither should that complexity be underestimated. This grid of analysis should make it possible to better discern the different planes on which the relationship between computing and the human sciences is established.
2-2 / COMPUTING AND COMMUNICATION
Information is present everywhere in our existence. The most varied messages constantly reach us from all points on the globe, in increasingly diversified forms, to the point that some worry about the scale of this flood, where it is becoming increasingly difficult to discern the essential from the accessory. This hurricane of information coming simultaneously from all fronts places modern man in an awkward situation, leaving him as perplexed as a beggar suddenly owning a palace and a large household; how to reconcile the slowness of the senses, the limited absorption capacity of the brain, and the brevity of life with the terrifying mass of knowledge that could be useful?
It is undoubtedly legitimate to hope that a progress in pedagogy will eventually result from a deeper understanding of psychology; however, it would be futile to believe that such progress could suffice for isolated individuals to grasp more than a very small part of the knowledge currently shared among the many members of the human community. Therefore, the necessity to further increase the potential for dissemination, communication, and collection of past facts and documents appears imperative. Human memory is insufficient; knowledge recorded in books cannot serve anyone if consulting the ever-increasing number of works is not made easier by some more practical and rapid method than searching through a library. Information services using electronic memories have significantly multiplied; already, in multiple applications (stock management, seat reservations, etc.), traditional writing methods have been abandoned. The very pressure of facts compels learning new techniques: transforming an image into a linear message, a table of numbers into a graph, a dispatch into code, a code into plain language; transposing the graph into phonetic form; deciphering the genetic alphabet; providing an objective typological description of fingerprints, pathological syndromes; constructing a model or prototype from diagrams and transitioning from an object to a planar representation without the slow intervention of draftsmen or mechanics.
For all these needs—these transformations where information takes on new faces adapted to recipients using different languages—it is essential that more flexible, faster techniques take over from humans so they can devote more time to essential tasks that machines cannot accomplish.
2-3 / COMPUTING AND EDUCATION
The advent of the microcomputer seemingly revolutionizes the dissemination of computing to the public, particularly in computer-assisted education. However, it is essential to remain critically honest in this regard. In theory, the microcomputer makes it relatively inexpensive to buy individual hardware, such that the family computer, possibly intended for education, is no longer a dream. Nevertheless, purchase costs remain tied to memory capacity and to the quality and quantity of software. It is thus relatively affordable to consider a microcomputer without peripherals for simple computer-assisted education (C.A.E.) based on basic software. In contrast, using peripherals and software that enable more effective pedagogical strategies, which take up more memory space and require additional memory, becomes much more expensive, since the cost of such software represents a significant additional expense. The emergence of microcomputers will therefore not eliminate the competition between simple C.A.E. on basic equipment and complex C.A.E. on relatively expensive equipment.
Educational software relies on the various programming languages provided by manufacturers with their hardware. Classic programming languages are generally not directly usable for writing courses. Consequently, computer scientists and educators have sought to develop specific educational software that allows courses of varying complexity to be written relatively easily. Among these educational tools, some aim for a certain universality, meaning they are designed to allow courses of any type to be written in any discipline. Notable examples include the Tutor language of the Plato system abroad and, in France, the authoring languages from O.P.E. (University of Paris-VII) and the "T.P." software from the University of Paris-V. Other software aims to let authors rapidly create courses requiring very specific pedagogical strategies, such as the diagnostic training software from the University of Paris-V. One of the significant current issues stems from the persistent tie between a given piece of software and the brand of computer for which it was written. This makes it considerably difficult to transfer software and courses created on one machine to another, such transfers always being costly in time and money. Educators therefore dream of having educational software with the widest possible range of application, available on a sufficient number of machines of different brands to allow easy transfers. The development of such software is currently being attempted in France. However, highly specialized software will still be necessary for specific kinds of teaching, particularly those involving simulation.
2-4 / Simulation Methods
Increasingly used by biologists and physicians, simulation methods have become one of the fundamental tools of quantitative biology. To simulate means to replace a complex real phenomenon with a simpler "constructed" phenomenon, a model. It has long been the case that physicists or engineers simulate, for example, the vibrations of an airplane wing or the functioning of a dam using electrical setups. Whether exploited on an analog computer or a digital computer, a model translates a formalized set of precise quantitative hypotheses, most often in the form of a system of differential equations. The behavior of the model is then compared with what is known about the real system. The quantitative similarity of behaviors does not "prove" that the model corresponds to reality (several models, based on very different principles, can behave similarly overall), but a clear deviation between the two behaviors highlights the falsity or inadequacy of the hypotheses made, a certainty that could not be acquired otherwise. This is how researchers "artificially" experiment with models of endocrine, neural, biochemical regulation, etc. Builders and users develop programming systems that significantly ease and enhance the phases of programming and numerical exploitation of this simulated experimentation, allowing the biologist and physician to devote all their time and attention to real experimentation and the formalization of their interpretations.
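As a hedged illustration of this approach, the sketch below (Python, with entirely fictitious parameters and measurements) integrates a toy negative-feedback model with Euler's method and compares the simulated trajectory with a few observations; a systematic gap between the two would signal inadequate hypotheses, exactly as described above.

```python
# Illustrative sketch: simulate a toy regulation model dH/dt = a - b*H
# (production at rate a, clearance proportional to H) and compare it
# with fictitious measurements.
def simulate(a, b, h0, dt, steps):
    """Euler integration of dH/dt = a - b*H."""
    h, trajectory = h0, [h0]
    for _ in range(steps):
        h += dt * (a - b * h)
        trajectory.append(h)
    return trajectory

model = simulate(a=2.0, b=0.5, h0=0.0, dt=0.1, steps=100)
observed = {10: 1.5, 50: 3.6, 100: 3.9}    # time step -> fictitious measurement

for step, measure in observed.items():
    gap = abs(model[step] - measure)
    print(f"t={step * 0.1:4.1f}  model={model[step]:.2f}  "
          f"observed={measure:.2f}  gap={gap:.2f}")
# A clear, systematic gap would indicate that the hypotheses (the values of
# a and b, or the form of the equation itself) are inadequate.
```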
2-5 / Electronics and Computing in Armaments
This brief summary would be insufficient without a few words about the recent and rapid "invasion" of electronics and computing in contemporary armaments. Nowadays, for "cutting-edge" weapon systems and their implementation environment, the share of electronics and computing can represent more than half of the equipment's cost. These techniques offer new and considerable capabilities, but they come with – pun intended – a staggering increase in production costs, especially if research and development expenses are not spread across very large series. As a result, only the major powers can produce this type of equipment, while "emerging" industries (China, Singapore, Brazil...) offer simple, low-cost equipment thanks to low labor costs. European nations are and will increasingly be affected by this evolution.
3 - COMPUTING AND PERSPECTIVES
The first wave of computing relied on large systems. The second wave generalized the use of personal computers. The third wave, which we are witnessing today, is characterized not by a radical change in the design of computers but by an evolution of interconnections.
3-1 / In the Field of Education
Computer-assisted education is thus already an indispensable means of pedagogical experimentation. The positive nature of recent experiences allows us to assert that computers will have a place in education in the future. However, it is possible that this place will not quantitatively surpass that of many other media available to educators. The most significant applications are not those where computers simply replace teachers, but those where they demonstrate different capabilities. In primary and secondary education, what seems most interesting is less C.A.E. in the strict sense than an awareness of computing itself. In higher education, as well as for adult retraining, computers are mainly useful for learning skills through corrected simulations (simulated acquisition of experience). Regardless of the application, it is essential to emphasize that computers must be integrated into the overall teaching process, just like books, lectures, slide projectors, and television.
Cost is the primary limitation to the expansion of this education, but it is difficult to estimate. Indeed, several financial aspects must be considered: the purchase or rental of equipment and its maintenance, the development of software, the writing of courses and the modifications teachers make to them, and operational costs. This cost naturally depends on the choice of equipment and structures. It seems important to highlight that inexpensive equipment, with relatively limited performance, restricts C.A.E. to its simplest aspects and is not necessarily the most profitable in the long term. Experience has shown that overly simplistic software, which does not allow for complex and especially varied pedagogical strategies, generates boredom, regardless of the teacher's quality. Thus, we see the dilemma that the development of C.A.E. poses for public authorities worldwide: to assess its profitability, it must be tried out on a large scale; to obtain a statistically valid and unbiased answer, sufficiently capable software is required. This represents considerable investment.
The feasibility of computer-assisted education is now demonstrated. Its usefulness, when it complements more traditional forms of education, is beyond doubt. Its application as a stimulator of creativity raises hopes that need to be confirmed. Its role in pedagogical research seems likely to remain significant for many years. Its future in specific areas appears assured, provided that society accepts to bear the additional costs it represents.
3-2 / AUTOMATION
Celebrating the benefits or denouncing the drawbacks of automation in its various forms is one of the favorite pastimes of the media world. As a result, the "average person" does not lack sources of information; however, is it easy for them to synthesize the knowledge they have gathered? Sometimes the term automation applies to improvements in certain household appliances or refinements in car gearboxes, while at other times it refers to industrial manufacturing processes. Occasionally, it pertains to the processes that occur, almost without human intervention, when a satellite is launched and subsequently commercially exploited; or it designates the use of machines in the operations of booking and selling airplane tickets.
During the 1970s, robots ceased to be a fairground curiosity and entered factories to replace workers by replicating their primary gestures. Soon, they will possess artificial intelligence and will be able to recognize their environment, determine certain of their movements, and perfect their own behaviors.
What common points connect the techniques thus implemented? It is unnecessary to emphasize the existence of a double language: that of technicians, inaccessible to the general public, and the accessible yet distorted language of those who want to create sensationalism, for example by using the term automation or a related term in advertising for commercial motives.
In its modern sense, the word automation and its derivatives refer to complex techniques or processes rather than improvements to simple, common devices. Moreover, practical implementations in this field have largely preceded theoretical studies. It was only with the industrial effort spurred by World War II that the opportunity arose to establish the fundamental disciplines, of a mathematical nature, in this field. These disciplines continue both to enrich and to benefit from work devoted to information processing – the specific domain of computing – on which they constantly rely. The entire body of theoretical knowledge thus acquired is perfectly coherent; it constitutes the science of automation.
Automation and Human Intervention
Automation is a technique or a set of techniques aimed at reducing or eliminating the need for human operators in a process where this intervention was customary. There is obviously no automation when human operators are replaced by animal power, nor when an artificial process is substituted for a natural process. Automation refers solely to a transformation of processes that are exclusively created by humans: techniques or a set of techniques. It thus aims to economize human intervention in all its forms (apart from energy input).
Therefore, automation can apply to processes that do not involve any appreciable physical energy: detection; control and measurement; real-time calculations, meaning as the process unfolds, to ensure its management; real-time management of a process to strictly control its economy; diagnostics; pattern recognition, meaning identification based on multiple criteria. Automation can reach various levels of complexity, which have been classified.
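As a minimal, purely hypothetical sketch of "real-time calculations to ensure the management" of a process, the loop below (Python, with invented numbers and a crude plant model) repeatedly measures a simulated temperature, computes a proportional correction, and applies it, the elementary pattern behind many of the regulation tasks just listed.

```python
# Illustrative regulation loop: measure, compute a correction, act, repeat.
SETPOINT = 70.0     # desired temperature (arbitrary units)
GAIN = 0.3          # proportional gain

def plant(temp, heating):
    """Crude process model: heating raises the temperature, losses lower it."""
    return temp + 0.5 * heating - 0.1 * (temp - 20.0)

temperature = 20.0
for step in range(30):
    error = SETPOINT - temperature          # deviation from the setpoint
    heating = max(0.0, GAIN * error)        # proportional command, never negative
    temperature = plant(temperature, heating)
    print(f"step {step:2d}  T={temperature:5.1f}  command={heating:4.2f}")
```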
A characteristic of modern automation is that it does not always aim to ensure the direct control of processes but sometimes only to provide decision support. This is the role of what we call artificial intelligence systems, which, if they have a specific vocation, are termed expert systems. These artificial intelligence systems are rapidly spreading in design assistance, diagnostics, maintenance, as well as in all areas where a significant amount of information needs to be processed to allow for decision-making.
Automation in Education
Automation in education is a concept that is often confused with computer-assisted education (C.A.E.), and it is also worth noting that automation initially concerned schools or higher education establishments before extending its influence. This is in part due to the use of certain systems that arose with the emergence of computer tools. However, the use of machines to prepare learning and assessment is a practical concern that existed long before the advent of computers.
In this field, the French word "automatisation" is employed to signify the automatic processing of the so-called "long" operations. This is one of the oldest methods of technological intervention in education and has been accompanied by notable awareness of various social dynamics. Its consequences, as indicated, affect numerous levels of our teaching systems.
The acceptance of the idea that certain processes can be automated remains weak, and this situation illustrates a cultural evolution, particularly among those who have an active role in education.
The underlying rationale for automating education lies in a growing desire for mass production in a society that increasingly depends on the availability of training, information, and knowledge. The principle of automation will certainly bring significant changes, even though it faces a fair number of obstacles.
The first-generation robot repeats the gestures that have been taught to it or that have been programmed; it knows nothing about its environment. The second-generation robot is capable of acquiring some data about its environment through embryonic "senses"; the third-generation robot will understand a language close to oral language and will be able to solve real problems after having perfected its own skills. Robots of this type will be true expert systems, working in real time.
Ultimately, the field of computing remains an immense space whose boundaries keep receding toward infinity. It is therefore impossible to fully grasp the prospects of this development tool.
Conclusion
Computing has revolutionized technology by making its way, unequivocally, into every domain. Its evolution continues to surprise us, because its power has allowed the conquest of new horizons. Nevertheless, it remains a very dangerous tool, one that could put humanity at risk.