
1.1: Context - Biology


Command Lines and Operating Systems

Many operating systems, including Microsoft Windows and Mac OS X, include a command line interface (CLI) as well as the standard graphical user interface (GUI). In this book, we are interested mostly in command line interfaces included as part of an operating system derived from the historically natural environment for scientific computing, Unix, including the various Linux distributions (e.g., Ubuntu Linux and Red Hat Linux), BSD Unix, and Mac OS X.

Even so, an understanding of modern computer operating systems and how they interact with the hardware and other software is useful. An operating system is loosely taken to be the set of software that manages and allocates the underlying hardware—divvying up the amount of time each user or program may use on the central processing unit (CPU), for example, or saving one user’s secret files on the hard drive and protecting them from access by other users. When a user starts a program, that program is “owned” by the user in question. If a program wishes to interact with the hardware in any way (e.g., to read a file or display an image to the screen), it must funnel that request through the operating system, which will usually handle those requests such that no one program may monopolize the operating system’s attention or the hardware.
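
To make this funneling concrete, here is a minimal Python sketch (assuming a writable working directory; the filename is just an illustrative placeholder). Python's os module exposes thin wrappers around the system calls through which every file request passes:

    import os

    # Even creating the file is a request to the operating system; the
    # high-level open() ultimately issues the same kinds of system calls.
    with open("example.txt", "w") as f:
        f.write("hello from user space\n")

    fd = os.open("example.txt", os.O_RDONLY)  # ask the OS to open the file
    data = os.read(fd, 100)                   # ask the OS for up to 100 bytes
    os.close(fd)                              # ask the OS to release the descriptor
    print(data.decode())

If the user lacked permission to read the file, the operating system would refuse the request and the os.open() call would raise an error, illustrating the protection of one user's files from another.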

The figure above illustrates the four main “consumable” resources available to modern computers (a short Python sketch for querying some of them appears after the list):

  1. The CPU. Some computers have multiple CPUs, and some CPUs have multiple processing “cores.” Generally, if there are n total cores and k programs running, then each program may access up to n/k processing power per unit time. The exception is when there are many processes (say, a few thousand); in this case, the operating system must spend a considerable amount of time just switching between the various programs, effectively reducing the amount of processing power available to all processes.
  2. Hard drives or other “persistent storage.” Such drives can store ample amounts of data, but access is quite slow compared to the speed at which the CPU runs. Persistent storage is commonly made available through remote drives “mapped in” over the network, making access even slower (but perhaps providing much more space).
  3. RAM, or random access memory. Because hard drives are so slow, all data must be copied into the “working memory” RAM to be accessed by the CPU. RAM is much faster but also much more expensive (and hence usually provides less total storage). When RAM is filled up, many operating systems will resort to trying to use the hard drive as though it were RAM (known as “swapping” because data are constantly being swapped into and out of RAM). Because of the difference in speed, it may appear to the user as though the computer has crashed, when in reality it is merely working at a glacial pace.
  4. The network connection, which provides access to the outside world. If multiple programs wish to access the network, they must share time on the connection, much like for the CPU.
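
As a concrete illustration, the short sketch below queries three of these four resources. It is a sketch under stated assumptions: the RAM check reads /proc/meminfo and so works only on Linux, and the path "/" is an arbitrary choice of filesystem to inspect.

    import os
    import shutil

    # 1. CPU: the number of cores the operating system divides among programs.
    print(f"CPU cores available: {os.cpu_count()}")

    # 2. Persistent storage: total and free space on the root filesystem.
    usage = shutil.disk_usage("/")
    print(f"Disk: {usage.free / 1e9:.1f} GB free of {usage.total / 1e9:.1f} GB")

    # 3. RAM: on Linux, /proc/meminfo reports memory sizes in kB.
    try:
        with open("/proc/meminfo") as f:
            meminfo = dict(line.split(":", 1) for line in f)
        total_kb = int(meminfo["MemTotal"].strip().split()[0])
        print(f"RAM: {total_kb / 1e6:.1f} GB total")
    except FileNotFoundError:
        print("RAM: /proc/meminfo not available on this system")

Dividing the reported core count by the number of running programs gives the rough n/k share of processing power described in item 1.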

Because the software interfaces we use every day—those that show us our desktop icons and allow us to start other programs—are so omnipresent, we often think of them as part of the operating system. Technically, however, these are programs that are run by the user (usually automatically at login or startup) and must make requests of the operating system, just like any other program. Operating systems such as Microsoft Windows and Mac OS X are in reality operating systems bundled with extensive suites of user software.

A Brief History

The complete history of the operating systems used by computational researchers is long and complex, but a brief summary and explanation of several commonly used terms and acronyms such as BSD, “open source,” and GNU may be of interest. (Impatient readers may at this point skip ahead, though some concepts in this subsection may aid in understanding the relationship between computer hardware and software.)

Foundational research into how the physical components that make up computing machinery should interact with users through software was performed as early as the 1950s and 1960s. In these decades, computers were rare, room-sized machines and were shared by large numbers of people. In the mid-1960s, researchers at Bell Labs (then owned by AT&T), the Massachusetts Institute of Technology, and General Electric developed a novel operating system known as Multics, short for Multiplexed Information and Computing Service. Multics introduced a number of important concepts, including advances in how files are organized and how resources are allocated to multiple users.

In the early 1970s, several engineers at Bell Labs were unhappy with the size and complexity of Multics, and they decided to reproduce most of its functionality in a slimmed-down version they called UNICS—this time short for Uniplexed Information and Computing Service—a play on the Multics name but not denoting a major difference in structure. As work progressed, the operating system was renamed Unix. Further developments allowed the software to be easily translated (or ported) for use on computer hardware of different types. These early versions of Multics and Unix also pioneered the automatic and simultaneous sharing of hardware resources (such as CPU time) between users, as well as the protection of files belonging to one user from other users—important features when many researchers must share a single machine. (These same features allow us to multitask on modern desktop computers.)

During this time, AT&T and its subsidiary Bell Labs were prohibited by antitrust legislation from commercializing any projects not directly related to telephony. As such, the researchers licensed, free of cost, copies of the Unix software to any interested parties. The combination of a robust technology, easy portability, and free cost ensured that there were a large number of interested users, particularly in academia. Before long, many applications were written to operate on top of the Unix framework (many of which we’ll use in this book), representing a powerful computing environment even before the 1980s.

In the early 1980s, the antitrust lawsuit against AT&T was settled, and AT&T was free to commercialize Unix, which they did with what we can only presume was enthusiasm. Unsurprisingly, the new terms and costs were not favorable for the largely academic and research-focused user base of Unix, causing great concern for many so heavily invested in the technology.

Fortunately, a group of researchers at the University of California (UC), Berkeley, had been working on their own research with Unix for some time, slowly reengineering it from the inside out. By the end of AT&T’s antitrust suit, they had produced a project that looked and worked like AT&T’s Unix: BSD (for Berkeley Software Distribution) Unix. BSD Unix was released under a new software license known as the BSD license: anyone was free to copy the software free of charge, use it, modify it, and redistribute it, so long as anything redistributed was also released under the same BSD license and credit was given to UC Berkeley (this last clause was later dropped). Modern versions of BSD Unix, while not used heavily in academia, are regarded as robust and secure operating systems, though they consequently often lack cutting-edge or experimental features.

In the same year that AT&T sought to commercialize Unix, computer scientist Richard Stallman responded by founding the nonprofit Free Software Foundation (FSF), dedicated to the idea that software should be free of ownership and that users should be free to use, copy, modify, and redistribute it. He also initiated the GNU operating system project, with the goal of re-creating the Unix environment under a license similar to that of BSD Unix. (GNU stands for GNU’s Not Unix: a recursive, self-referencing acronym exemplifying the peculiar humor of computer scientists.)

The GNU project implemented a licensing scheme that differed somewhat from the BSD license. GNU software was to be licensed under terms created specifically for the project, called the GPL, or GNU General Public License. The GPL allows anyone to use the software in any way they see fit (including distributing it for free or selling any program built using it), provided they also make available the human-readable code that they’ve created and license it under the GPL as well (the essence of “open source”[1]). It’s as if the Ford Motor Company gave away the blueprints for a new car, with the requirement that any car designed using those blueprints also come with its own blueprints and similar rules. For this reason, software written under the GPL has a natural tendency to spread and grow. Ironically and ingeniously, Richard Stallman and the BSD group used the licensing system, generally intended to restrict the spread of intellectual property (and the cause of the Unix crisis of the 1980s), to ensure the perpetual freedom of their work (and with it, the Unix legacy).

While Stallman and the FSF managed to re-create most of the software that made up the standard Unix environment (the bundled software), they did not immediately re-create the core of the operating system (also called the kernel). In 1991, computer science student Linus Torvalds began work on this core GPL-licensed component, which he named Linux (pronounced “lin-ucks,” as prescribed by the author himself). Many other developers quickly contributed to the project, and now Linux is available in a variety of “distributions,” such as Ubuntu Linux and Red Hat Linux, that include both the Linux kernel and a collection of Unix-compatible GPL (and occasionally non-GPL) software. Linux distributions differ primarily in which software packages come bundled with the kernel and how those packages are installed and managed.

Today, a significant number of software projects are issued under the GPL, BSD, or similar “open” licenses. These include both the Python and R projects, as well as most of the other pieces of software covered in this book. In fact, the idea has caught on for noncode projects as well, with many documents (including this one) published under open licenses like Creative Commons, which allow others to use materials free of charge, provided certain provisions are followed.



1.1 Sustainability Definitions

The term sustainability has multidisciplinary use and meaning. Dictionaries typically describe sustainability as the capability of a system to endure and maintain itself, but various disciplines apply the term differently.

Throughout the history of humankind, the concept of sustainability has been connected to human-dominated ecological systems, from the earliest civilizations to the present. A particular society might experience local growth and developmental success, followed by crises that were either resolved, resulting in sustainability, or not resolved, leading to decline.

In ecology, the word sustainability characterizes the ability of biological systems to remain healthy, diverse, and productive over time. Long-lived and healthy wetlands and forests are examples of sustainable biological systems.

Since the 1980s, the term sustainability has been used more in the sense of human sustainability on planet Earth, which leads us to the concept of sustainable development, defined by the Brundtland Commission of the United Nations (March 20, 1987) as follows: "Sustainable development is development that meets the needs of the present without compromising the ability of future generations to meet their own needs." The following video further elaborates on this definition and gives a few examples of its meaning.

With human decision-making involved, sustainability attains a significant ethical aspect and transforms the social paradigm of success, growth, profit, and standards of living. This reevaluation requires a broader and more synergistic view of the many components of human-dominated ecosystems, including technology.

The topic of sustainable development gained enough importance in the last few decades of the 20th century to become a central discussion point at the 1987 General Assembly of the United Nations (UN). Concerned by the rapid deterioration of the human environment, uneven development, poverty, population growth, and extreme pressure on the planet's land, water, forests, and other natural resources, the UN issued an urgent call to the World Commission on Environment and Development to formulate a "global agenda for change" [UN, 1987]. The result of this action was the report "Our Common Future," which then served as a global guideline for the world's nations in formulating their political and economic agendas. This document is almost 40 years old now and was followed by a long array of actions and movements in subsequent years. But let us go back for a little bit and see how it all started.

The original 1987 report prepared by the World Commission on Environment and Development is a big document (over 300 pages), so I do not advise you to read it all right away. The following reading (about 16 pages) is Chapter 2 of the report, which talks specifically about the concept of sustainable development. Some of the terms, definitions, and perspectives outlined there will be especially useful for our further work and discussions in this course. So, here is your first reading assignment:

Reading Assignment:

This document summarizes a consensus on sustainable development and outlines the strategies that should enable reaching sustainability goals. Adopted in 1987, it formed the background for many later attempts to formulate sustainability principles in very diverse areas: science, industry, and economics. Reading through this chapter will give you important background on how the sustainability movement began and what issues drove sustainable thinking four decades ago.

While reading, take note of the concept of growth, how it is interpreted, and what positive and negative implications are associated with it. This context will be helpful later in this lesson as we go on to analyze and discuss the question of growth on the forum.

Three Pillars of Sustainability

Sustainable development involves environmental, economic, and social aspects. For a particular process to be sustainable, it should not cause irreversible change to the environment, should be economically viable, and should benefit society. One example illustration of the interplay among these three spheres is provided below in Figure 1.1, which represents sustainability as the synergy between society, economics, and environment.

The environmental aspects include the use of natural resources, pollution prevention, biodiversity, and ecological health. The social aspects include standards of living, the availability of education and jobs, and equal opportunities for all members of society. The economic factors are drivers for growth, profit, cost reduction, and investment in research and development. More factors than these affect the sustainability of a social system; the few above are listed as examples.

The interaction of the social and economic spheres results in combined social-economic aspects, such as business ethics, fair trade, and workers' benefits. The combination of economic and environmental interests facilitates increased energy efficiency, the development of renewable fuels and green technologies, and the creation of special incentives and subsidies for environmentally sound businesses. The intersection of the social and environmental spheres leads to conservation and environmental protection policies, environmental justice, and global stewardship for the sustainable use of natural resources.

This framework is in some ways a simplification, but it has proved helpful in identifying key areas of impact and sets the basis for objective analysis. Further in this course, particular processes and technologies will often be evaluated in terms of their social, economic, and environmental impacts, although we should understand that these three pillars are never fully isolated from one another.

Dimensions of Sustainability

The above-mentioned three pillars of sustainability are very common terms in the literature, media, and communications, and they convey an idea that is simple to grasp. However, the interconnections between these three pillars are not at all simple and can occur in different planes of thinking. Three fundamental meanings, or dimensions, of sustainability were defined by Christian Becker in his book "Sustainability Ethics and Sustainability Research": continuance, orientation, and relationships. To understand exactly what those dimensions mean, please refer to the following reading. As discussed in this chapter, the multi-dimensional nature of sustainability often results in confusion and miscommunication between the different entities and spheres involved. For example, an environmentalist, an economist, and a politician can discuss sustainability as a project goal while actually having three different goals in mind. New project developers in the sustainability era should therefore seek to broaden their perspective and, at the same time, develop sufficient depth in articulating their sustainability vision. Enjoy the reading:

Reading Assignment:

Book chapter: C.U. Becker, Sustainability Ethics and Sustainability Research, Chapter 2: Meaning of Sustainability, Springer 2012, pages 9-15. (Available through E-Reserves in Canvas.)

When reading, pay special attention to the various dimensions of sustainability and why they need to be recognized. Think – how would you define the term "sustainability" in your own words?

Check Your Understanding - Reflection Point

Now that you have read C. Becker's text, consider which of the three meanings of sustainability mentioned is closest to your mindset. When you hear people talking about a sustainable economy or a sustainable society, what comes to your mind first? Also reflect on which dimension of sustainability has been missing from your vision. Do you agree with the author of the chapter that all three dimensions must be equally included in the discussion?

Write a few sentences summarizing your thoughts and keep them in your notes. You may want to return to this reflection later for the introduction or discussion section of your course project.


Note: this is an ungraded assignment; you are making this reflection solely for your own reference.

If you completed the short reflection note in the box above, good job! You will find it very beneficial to write down some of your own thoughts while the reading is still fresh in your mind.

Further in this course, we will occasionally revisit the definitions and interpretations of sustainability. This is one of the concepts that sets the context for our main focus in this course: the role and assessment of technology. In the next section of this lesson, we will start to see how technology is sometimes considered the cornerstone of societal development and survival. While some theories bet heavily on technology as the universal solution to society's ever-growing needs, others are much more skeptical. So, prepare for some controversy.

Supplemental reading on sustainable development

This document provides a more detailed outline of the goals of the global community for sustainable development. You are not required to read the entire document, but it may be interesting to scan through it and see how it follows up on the initial guidelines adopted in 1987.


Social Context, Biology, and the Definition of Disorder

In recent years, medical sociologists have increasingly paid attention to a variety of interactions between social and biological factors. These include how social stressors impact the functioning of physiological systems, how sociocultural contexts trigger genetic propensities or mitigate genetic defects, and how brains are attuned to social, cultural, and interactional factors. This paper focuses on how both sociocultural and biological forces influence what conditions are contextually appropriate responses or disorders. It also suggests that some of the most obdurate health problems result from mismatches between natural genes and current social circumstances rather than from genetic defects. Finally, it examines how social environments have profound impacts on how much harm disorders create. It shows how sociological insights can help establish valid criteria for illnesses and indicates the complexities involved in defining what genuine disorders are.

Keywords: biology; genes and environments; mental disorder; social context.


Which context?

Genetic safeguards are biocontainment and control mechanisms that would theoretically be applicable to a variety of host cells, irrespective of their application. For example, a synthetic phosphite auxotrophy has been successfully implemented in both Escherichia coli and cyanobacteria to drastically reduce the risk that these cells could escape and survive in an environment without phosphite (Motomura et al, 2018). Another example is genome recoding, which prevents horizontal gene transfer and which has been fully or partially achieved in E. coli, Salmonella typhimurium, and yeast (Kuo et al, 2018). Such genetic safeguards are what Mampuys and Brom (2018) describe as “horizontal integration” of biotechnology, namely the development of techniques that are increasingly more versatile and less application-specific.

Genetic safeguards are designed as “plug-ins” that can be implemented in different chassis. In reality, though, they are far less universally applicable than this vision would suggest. To begin with, current laboratory practice still develops genetic safeguards in and for specific organisms, with E. coli and yeast being the dominant ones. The effort required to implement an existing strategy in a new species is not negligible. For instance, the genome editing tools needed to recode the genome are much less developed in S. typhimurium and other bacteria than they are in E. coli. Second, different chassis are not interchangeable, owing to the physiological particularities of each species. For example, the engineering required for a synthetic phosphite auxotrophy depends on the specific phosphorus transport system of the species, which might consist of different transporters and transport complexes. Moreover, specific species are more suitable for certain scenarios or environments: soil bacteria, gut microbiota, or phototrophic organisms. Correspondingly, it is reasonable to expect that application contexts will dictate the use of a specific biological chassis (de Lorenzo et al, 2021).

There is a wide range of potential applications of synthetic biology, from medical to industrial to agricultural and environmental, that would benefit from genetic safeguards. Nonetheless, most gains are expected in novel and pervasive applications that are currently considered too risky, such as medical applications for human use or uncontained environmental applications for bioremediation (Moe-Behrens et al, 2013; Schmidt & de Lorenzo, 2016). Clearly, such applications vary greatly in their characteristics, from their deployment site and ecological scale of intervention to their perceived benefits and the human practices they will affect.

Given the diversity of scenarios in which genetic safeguards may be applicable, it is striking that they are designed, evaluated, and regulated independently of their context. What is more, it is highly unlikely that the same strategies, or combinations thereof, will be equally useful, relevant, or desirable across such grossly diverse contexts. After all, seat belts in cars are distinctly different from seat belts in racing cars, airplanes, or rollercoasters. The aforementioned phosphite auxotrophy, for example, makes perfect sense as a biocontainment strategy for agricultural applications but may not be the best choice in biomedical settings. As we are about to see, contextualization offers tangible benefits in capturing and responding to the particularities of each context and the corresponding needs and interests of stakeholders.



Teleology

Teleology, from Greek τέλος, telos "end, purpose" [3] and -λογία, logia, "a branch of learning", was coined by the philosopher Christian von Wolff in 1728. [4] The concept derives from the ancient Greek philosophy of Aristotle, where the final cause (the purpose) of a thing is its function. [5] However, Aristotle's biology does not envisage evolution by natural selection. [6]

Phrases used by biologists like "a function of ... is to ..." or "is designed for" are teleological at least in language. The presence of real or apparent teleology in explanations of natural selection is a controversial aspect of the philosophy of biology, not least for its echoes of natural theology. [1] [7]

Natural theology

Before Darwin, natural theology both assumed the existence of God and used the appearance of function in nature to argue for the existence of God. [9] [10] The English parson-naturalist John Ray stated that his intention was "to illustrate the glory of God in the knowledge of the works of nature or creation". [8] Natural theology presented forms of the teleological argument or argument from design, namely that organs functioned well for their apparent purpose, so they were well-designed, so they must have been designed by a benevolent creator. For example, the eye had the function of seeing and contained features like the iris and lens that assisted with seeing; therefore, ran the argument, it had been designed for that purpose. [9] [10] [11]

Goal-directed evolution

Religious thinkers and biologists have repeatedly supposed that evolution was driven by some kind of life force, a philosophy known as vitalism, and have often supposed that it had some kind of goal or direction (towards which the life force was striving, if they also believed in that), known as orthogenesis or evolutionary progress. Such goal-directedness implies a long-term teleological force; some supporters of orthogenesis considered it to be a spiritual force, while others held that it was purely biological. For example, the Russian embryologist Karl Ernst von Baer believed in a teleological force in nature, [12] [13] whereas the French spiritualist philosopher Henri Bergson linked orthogenesis with vitalism, arguing for a creative force in evolution known as élan vital in his book Creative Evolution (1907). [14] The French biophysicist Pierre Lecomte du Noüy and the American botanist Edmund Ware Sinnott developed vitalist evolutionary philosophies known as telefinalism and telism, respectively. Their views were heavily criticized as non-scientific; [15] the palaeontologist George Gaylord Simpson argued that du Noüy and Sinnott were promoting religious versions of evolution. [16] The Jesuit paleontologist Pierre Teilhard de Chardin argued that evolution was aiming for a supposed spiritual "Omega Point" in what he called "directed additivity". [17] [18] With the emergence of the modern evolutionary synthesis, in which the genetic mechanisms of evolution were discovered, the hypothesis of orthogenesis was largely abandoned by biologists, [19] [20] especially with Ronald Fisher's argument in his 1930 book The Genetical Theory of Natural Selection. [21]

Natural selection

Natural selection, introduced in 1859 as the central mechanism [a] of evolution by Charles Darwin, is the differential survival and reproduction of individuals due to differences in phenotype. [23] The mechanism directly implies evolution, a change in heritable traits of a population over time. [24]

Adaptation

A trait which persists in a population is often assumed by biologists to have been selected for in the course of evolution, raising the question of how the trait achieves this. Biologists call any such mechanism the function of the trait, using phrases like "A function of stotting by antelopes is to communicate to predators that they have been detected", [1] or "The primate hand is designed (by natural selection) for grasping." [25]

An adaptation is an observable structure or other feature of an organism (for example, an enzyme) generated by natural selection to serve its current function. A biologist might propose the hypothesis that feathers are adaptations for bird flight. That would require three things: that the trait of having feathers is heritable; that the trait does serve the function of flight; and that the trait increases the fitness of the organisms that have it. Feathers clearly meet these three conditions in living birds. However, there is also a historical question, namely, did the trait arise at the same time as bird flight? Unfortunately for the hypothesis, this seems not to be so: theropod dinosaurs had feathers, but many of them did not fly. [26] [27] Feathers can be described as an exaptation, having been co-opted for flight but having evolved earlier for another purpose such as insulation. Biologists may describe both the co-option and the earlier adaptation in teleological language. [26] [28] [29]

Reasons for discomfort

Apparent teleology is a recurring issue in evolutionary biology, [30] much to the consternation of some writers, [31] and as an explanatory style it remains controversial. [31] There are various reasons for discomfort with teleology among biologists. [1] [32]

Firstly, the concept of adaptation is itself controversial, as it can be taken to imply, as the evolutionary biologists Stephen Jay Gould and Richard Lewontin argued, that biologists agree with Voltaire's Doctor Pangloss in his 1759 satire Candide that this is "the best of all possible worlds", in other words that every trait is perfectly suited to its functions. [33] However, all that evolutionary biology requires is the weaker claim that one trait is at least slightly better in a certain context than another, and hence is selected for. [1]

Secondly, teleology is linked to the pre-Darwinian idea of natural theology, that the natural world gives evidence of the conscious design and beneficent intentions of a creator, as in the writings of John Ray. [1] William Derham continued Ray's tradition with books such as his 1713 Physico-Theology and his 1714 Astro-Theology. [34] They in turn influenced William Paley who wrote a detailed teleological argument for God in 1802, Natural Theology, or Evidences of the Existence and Attributes of the Deity collected from the Appearances of Nature, [35] starting with the Watchmaker analogy. [36] Such creationism, along with a vitalist life-force and directed orthogenetic evolution, has been rejected by most biologists. [1]

Thirdly, attributing purposes to adaptations risks confusion with popular forms of Lamarckism where animals in particular have been supposed to influence their own evolution through their intentions, though Lamarck himself spoke rather of habits of use, and the belief that his thinking was teleological has been challenged. [37] [38] [39]

Fourthly, the teleological explanation of adaptation is uncomfortable because it seems to require backward causation, in which existing traits are explained by future outcomes; because it seems to attribute the action of a conscious mind when none is assumed to be present in an organism; and because, as a result, adaptation looks impossible to test empirically. [1]

A fifth reason concerns students rather than researchers: Gonzalez Galli argues that since people naturally imagine that evolution has a purpose or direction, then the use of teleological language by scientists may act as an obstacle to students when learning about natural selection. Such language, he argues, should be removed to make teaching more effective. [40]

Removable teleological shorthand

Statements which imply that nature has goals, for example where a species is said to do something "in order to" achieve survival, appear teleological, and therefore invalid to evolutionary biologists. It is however usually possible to rewrite such sentences to avoid the apparent teleology. Some biology courses have incorporated exercises requiring students to rephrase such sentences so that they do not read teleologically. Nevertheless, biologists still frequently write in a way which can be read as implying teleology, even though that is not their intention. [41] John Reiss argues that evolutionary biology can be purged of apparent teleology by rejecting the pre-Darwinian watchmaker analogy for natural selection [41] [42] other arguments against this analogy have also been promoted by writers such as the evolutionary biologist Richard Dawkins. [43]

Some philosophers of biology such as James G. Lennox have argued that Darwin was a teleologist, [44] while others like Michael Ghiselin described this claim as a myth promoted by misinterpretations of his discussions, and emphasized the distinction between using teleological metaphors and actually being teleological. [45] Michael Heads, on the other hand, describes a change in Darwin's thinking about evolution that can be traced from the first volume of On the Origin of Species to later volumes. For Heads, Darwin was originally a far more teleological thinker, but over time "learned to avoid teleology." Heads cites a letter Darwin wrote in 1872, in which he downplayed the role of natural selection as a causal force on its own in explaining biological adaptation, and instead gave more weight to "laws of growth" that operate [without the aid of natural selection]. [46]

Andrew Askland, from the Sandra Day O'Connor College of Law claims that unlike transhumanism, an ideology that aims to improve the human condition, which he asserts is "wholly teleological", Darwinian evolution is not teleological. [47]

Various commentators view the teleological phrases used in modern evolutionary biology as a type of shorthand for describing any function which offers an evolutionary advantage through natural selection. For example, the zoologist S. H. P. Madrell wrote that "the proper but cumbersome way of describing change by evolutionary adaptation [may be] substituted by shorter overtly teleological statements" for the sake of saving space, but that this "should not be taken to imply that evolution proceeds by anything other than from mutations arising by chance, with those that impart an advantage being retained by natural selection." [48]

Irreducible teleology

Other philosophers of biology argue instead that biological teleology is irreducible and cannot be removed by any simple process of rewording. Francisco Ayala specified three separate situations in which teleological explanations are appropriate. First, if the agent consciously anticipates the goal of their own action; for example, the behavior of picking up a pen can be explained by reference to the agent's desire to write. Ayala extends this type of teleological explanation to non-human animals by noting that "[a] deer running away from a mountain lion ... has at least the appearance of purposeful behavior." [49] Second, teleological explanations are useful for systems that have a mechanism for self-regulation despite fluctuations in the environment; for example, the self-regulation of body temperature in animals. Finally, they are appropriate "in reference to structures anatomically and physiologically designed to perform a certain function." [49]

Ayala, relying on work done by the philosopher Ernest Nagel, also rejects the idea that teleological arguments are inadmissible because they cannot be causal. For Nagel, teleological arguments must be consistent because they can always be reformulated as non-teleological arguments. The difference between the two is, for Ayala, merely one of emphasis. Nagel writes that while teleological arguments focus on "the consequences for a given system of a constituent part or process," the equivalent non-teleological arguments focus on "some of the conditions ... under which the system persists in its characteristic organization and activities." [50] However, Ayala argued that teleological statements are more explanatory and cannot be disposed of. [51] [52] Karen Neander similarly argued that the modern concept of biological 'function' depends on natural selection. So, for example, it is not possible to say that anything that simply winks into existence, without going through a process of selection, actually has functions. We decide whether an appendage has a function by analysing the process of selection that led to it. Therefore, Neander argues, any talk of functions must be posterior to natural selection; function must be defined by reference to the history of a species, and teleology cannot be avoided. [53] The evolutionary biologist Ernst Mayr likewise stated that "adaptedness ... is an a posteriori result rather than an a priori goal-seeking." [37]

Angela Breitenbach, looking at the question of teleology in biology from a Kantian perspective, argues that teleology is important as "a heuristic in the search for causal explanations of nature and ... an inevitable analogical perspective on living beings." In her view of Kant, teleology implies something that cannot be explained by science, but only understood through analogy. [54]

Colin Pittendrigh coined the similar term 'teleonomy' for apparently goal-directed biological phenomena. For Pittendrigh, the notion of 'adaptation' in biology, however it is defined, necessarily "connote[s] that aura of design, purpose, or end-directedness, which has, since the time of Aristotle, seemed to characterize the living thing". [55] This association with Aristotle, however, is problematic, because it meant that the study of adaptation would inevitably be bound up with teleological explanations. Pittendrigh sought to preserve the aspect of design and purpose in biological systems, while denying that this design can be understood as a causal principle. The confusion, he says, would be removed if we described these systems "by some other term, like 'teleonomic,' in order to emphasize that the recognition and description of end-directedness does not carry a commitment to Aristotelian teleology as an efficient causal principle." [56] Ernst Mayr criticised Pittendrigh's confusion of Aristotle's four causes, arguing that evolution only involved the material and formal but not the efficient cause. [b] Mayr proposed to use the term only for "systems operating on the basis of a program of coded information." [57]

William C. Wimsatt affirmed that the teleologicality of the language of biology and other fields derives from the logical structure of their background theories, and not merely from the use of teleological locutions such as "function" and "in order to". He stated that "To replace talk about function by talk about selection [...] is not to eliminate teleology but to rephrase it". However, Wimsatt argues that this thought does not mean an appeal to backwards causation, vitalism, entelechy, or anti-reductionist sentiments. [58]

The biologist J. B. S. Haldane observed that "Teleology is like a mistress to a biologist: he cannot live without her but he's unwilling to be seen with her in public." [59] [60]


Basic tenets of anthropology:

  1. Holism: Holism means that a part of something can only truly be understood if examined in relation to the whole. For anthropologists, this means that they try to understand humankind through the interrelationships of all aspects of human existence -- for example, human biology has to be examined within the context of human cultures and vice versa. In addition, all of this must be examined within the context of the environment and historical processes. In an effort to be holistic, anthropology is often an interdisciplinary field that crosses over into other fields such as history, geology, and ecology.
  2. Relativism: Relativism means that judgments, truths, or moral values have no absolutes, and can only be understood relative to the situation or individuals involved. For anthropologists, this means that they accept that all cultures are of equal value and must be studied from a neutral point of view. A good anthropologist must disregard their own beliefs, morals, and judgments when examining another culture. They must, instead, examine each culture within the context of its own beliefs.
  3. Universalism: Universalism means that whatever the theoretical principle is, it's equally applicable to all. For anthropologists, universalism means that we believe all humans are equal -- in intelligence, complexity, etc. This is in contrast to ethnocentrism, which is the belief that some peoples are more important or culturally/biologically better than other peoples.
  4. Culture: All humans have culture. Culture is the set of learned behaviors and knowledge that belong to a certain set of people. This is different from genetically hardwired behaviors (such as reflexes) in that they aren't biologically inherited. The most important thing to remember is that culture is learned.


Acknowledgements

We appreciate the dedication of the BMDCS study participants and their families, and the support of Dr. Karen Winer, Scientific Director of this effort. We are also grateful to all the ALSPAC families who took part in this study, the midwives for their help in recruiting them, and the whole ALSPAC team, which includes interviewers, computer and laboratory technicians, clerical workers, research scientists, volunteers, managers, receptionists, and nurses.

Review history

The review history is available as Additional file 3.

Peer review information

Tim Sands was the primary editor of this article and managed its editorial process and peer review in collaboration with the rest of the editorial team.


Conclusions

In summary, we have focused on an essential problem in the field of regulatory variant prioritization and disease-associated gene detection. Considering the importance of cellular chromatin states, we have developed a context-dependent method to quantify the regulatory potential of genetic variants in a particular tissue/cell type. Previous studies suggested that a single tissue/cell type-specific epigenetic mark, such as H3K4me3 [18] or H3K27ac [19], could be used to fine-map GWAS loci for particular diseases/traits. Our context-dependent prioritization method instead uses the integrative effect of multiple chromatin states to identify functional regulatory variants. Building on our previous context-free regulatory variant prediction method, we have demonstrated that context-dependent epigenomic weighting can improve the identification of both variant-level and gene-level susceptibility loci in GWAS. In the future, we will regularly update the epigenome data to cover more tissues/cell types and integrate cepip into our comprehensive downstream analysis platform KGGSeq [56, 57].


Reporter's comments

It would have been interesting to know whether 'traditional' virulence effectors could be exported through the flagellum export apparatus when the vir plasmid was present but the TTSS on it was knocked out. Such experiments would also raise questions about the signals required for export through the two systems, especially if each system secreted only a discrete set of effectors. The identity of the other Fops remains to be elucidated. Those not involved in flagellum synthesis could be important for bacterial virulence. The identification of YplA as a Fop, and the fact that expression of yplA is within a flagellar transcriptional regulon, raises the fascinating possibility that YplA was once involved in flagellum biosynthesis but has since been co-opted for a role in virulence.

