Transportation By Road

Road transport started with the development of tracks by us and our ‘beasts of burden’. The first forms of road transport were horses and oxen, used for carrying goods over tracks that often followed game trails, such as the Natchez Trace, a historic forest trail within the United States of America. That trail extends roughly 440 miles from Nashville, Tennessee to Natchez, Mississippi and links the Cumberland, Tennessee and Mississippi rivers. It was created and used by Native Americans for centuries and later used by early European and American explorers, traders and emigrants in the late 18th and early 19th centuries. European Americans founded inns along the Trace to serve food and lodging to travellers; however, as travel shifted to steamboats on the Mississippi and other rivers, most of these inns were closed. Today the path is commemorated by a parkway which bears the same name and follows the approximate path of the Trace. In the Palaeolithic Age, we did not need constructed tracks in open country and the first improved trails would have been at fords, mountain passes and through swamps. The first improvements were made by clearing trees and big stones from the path and, as commerce increased, the tracks were often flattened or widened to more easily accommodate human and animal traffic. Some of these dirt tracks were developed into fairly extensive networks, thereby allowing for communications, trade and governance over wider areas. The Incan Empire in South America and the Iroquois Confederation in North America, neither of which had the wheel at that time, are examples of effective use of such paths. Goods were first carried on human backs and heads, but the use of pack animals, including donkeys and horses, was developed during the Neolithic Age. The first vehicle is believed to have been the travois, a frame dragged behind an animal; its name derives from the French ‘travail’, a frame for restraining horses.

Cheyenne using a Travois.

Travois were probably used in other parts of the world before the invention of the wheel and developed in Eurasia after the first use of bullocks for the pulling of ploughs. In about 5000 BC, sleds were developed, which are more difficult to build than travois but are easier to propel over smooth surfaces. Pack animals, ridden horses and bullocks dragging travois or sleds require wider paths and higher clearances than people on foot, and so improved tracks were required. By about 5000 BC, proper roads were developed along ridges in England to avoid crossing rivers and getting bogged down. Travellers have used these ridgeways for a great many years and the Ridgeway National Trail itself follows an ancient path from Overton Hill near Avebury to Streatley. It then follows footpaths and parts of the ancient Icknield Way through the Chiltern Hills to Ivinghoe Beacon in Buckinghamshire. Ridgeways provided a reliable trading route to the Dorset coast and to the Wash in Norfolk as the high and dry ground made travel easy and provided a measure of protection by giving traders a commanding view, warning against potential attacks. During the Iron Age, the local inhabitants took advantage of the high ground by building hill-forts along the Ridgeway to help defend the trading route. Then, following the collapse of Roman authority in Western Europe, the invading Saxon and Viking armies used it. In medieval times and later, the ridgeways found use by drovers, moving their livestock from the West Country and Wales to markets in the Home Counties and London. The Enclosure Acts, or to use the archaic spelling ‘Inclosure Acts’, covered the enclosure of open fields and common land in England and Wales, creating legal property rights to land previously held in common. Between 1604 and 1914, over 5,200 individual enclosure acts were passed, affecting just under 11,000 square miles.
Before these enclosures in England, a portion of the land was categorised as ‘common’ or ‘waste’ and whilst common land was under the control of the lord of the manor, certain rights on the land such as pasture, pannage or estovers (an allowance, usually of wood, made to a person out of an estate for their support) were held variously by certain nearby properties, or occasionally ‘in gross’ by all manorial tenants. ‘Waste’ was land without value as a farm strip, often very narrow areas (typically less than a yard wide) in difficult locations such as cliff edges or awkwardly shaped manorial borders, but also bare rock. Waste was not officially used by anyone, and so was often farmed by landless peasants. The remaining land was organised into a large number of narrow strips, each tenant possessing a number of strips throughout the manor; what might now be termed a single field would have been divided under this system amongst the lord and his tenants, whilst poorer peasants were allowed to live on the strips owned by the lord in return for cultivating his land. The system facilitated common grazing and crop rotation. Once enclosures started, the paths developed through the building of earth banks and the planting of hedges.

A Greek street – 4th or 3rd century BC.

Wheels appear to have been developed in ancient Sumer in Mesopotamia around 5000 BC, perhaps originally for the making of pottery. Their original transport use may have been as attachments to travois or sleds to reduce resistance. Most early wheels appear to have been attached to fixed axles, which would have required regular lubrication by animal fats or vegetable oils, or separation by leather, to be effective. The first simple two-wheeled carts, apparently developed from travois, appear to have been used in Mesopotamia and northern Iran in about 3000 BC, and two-wheeled chariots appeared in about 2800 BC. They were hauled by onagers, an Asiatic wild ass related to donkeys. Heavy four-wheeled wagons were then developed about 2500 BC, but these were only suitable for ox-haulage and were therefore only used where crops were cultivated. Two-wheeled chariots with spoked wheels appear to have been developed around 2000 BC by the Andronovo culture in southern Siberia and Central Asia, and at much the same time the first primitive harness was invented, enabling horse-drawn haulage. Wheeled transport created the need for better roads, as natural materials were generally not found to be both soft enough to form well-graded surfaces and strong enough to bear wheeled vehicles, especially when wet, and stay intact. In urban areas it became worthwhile to build stone-paved streets, and the first paved streets appear to have been built around 4000 BC. Log roads, made by placing logs perpendicular to the direction of the road over low or swampy areas, were an improvement over impassable mud or dirt roads, but they were rough in the best of conditions and a hazard to horses due to the shifting of loose logs. Such log roads were built at Glastonbury in about 3300 BC, and brick-paved roads were built in the Indus Valley on the Indian subcontinent from around the same time.
Then improvements in metallurgy meant that by 2000 BC, stone-cutting tools were generally available in the Middle East and Greece allowing local streets to be paved. In 500 BC Darius the Great started an extensive road system for Persia, including the famous Royal Road which was one of the finest highways of its time and which was used even after Roman times. Because of the road’s superior quality, mail couriers could travel almost 1,700 miles in seven days.

A map of Roman roads in 125 CE.

With the advent of the Roman Empire, there was a need for armies to be able to travel quickly from one area to another, and existing roads were often muddy, which greatly delayed the movement of large masses of troops. To solve this problem, the Romans built great roads which used deep roadbeds of crushed stone as an underlying layer to ensure that they kept dry, as the water would flow out from the crushed stone instead of becoming mud in clay soils. The legions made good time on these roads and some are still used now. On the more heavily travelled routes, there were additional layers that included six-sided capstones, or pavers, that reduced the dust and reduced the drag from wheels. These pavers allowed the Roman chariots to travel very quickly, ensuring good communication with the Roman provinces. Farm roads were often paved first towards town, to keep produce clean. Early forms of springs and shock absorbers to reduce the bumps were incorporated in horse-drawn transport, as the original pavers were sometimes not perfectly aligned. But Roman roads deteriorated in medieval Europe because of a lack of resources and skills to maintain them; the alignments are still partially used today though, such as on sections of our A1 road. The earliest specifically engineered roads were built during the British Iron Age and the road network was expanded during the Roman occupation. New roads were added in the Middle Ages and from the 17th century onwards; as life slowly developed and became richer, especially with the Renaissance, new roads and bridges began to be built, often based on Roman designs. More and more roads were built, but responsibility for the state of the roads had lain with the local parish since Tudor times. Then in 1656 the parish of Radwell, Hertfordshire petitioned Parliament for help in order to maintain their section of the Great North Road.
Parliament passed an act which gave the local justices powers to erect toll-gates on a section of the Great North Road, between Wadesmill in Hertfordshire, Caxton in Cambridgeshire and Stilton in Huntingdonshire, for a period of eleven years, and the revenues so raised were to be used for the maintenance of the Great North Road in their jurisdictions. The toll-gate erected at Wadesmill became the first effective toll-gate in England. Then came the Turnpike Act in 1707, beginning with a section of the London to Chester road between Fornhill and Stony Stratford. The idea was that the trustees would manage resources from the several parishes through which the highway passed, augment this with tolls from users from outside the parishes and apply the whole to the maintenance of the main highway. This became the pattern for a growing number of highways to have tolls on them, and such tolls were sought by those who wished to improve the flow of commerce through their part of a county. At the beginning of the 18th century, sections of the main radial roads into London were put under the control of individual turnpike trusts. The pace at which new turnpikes were created picked up in the 1750s as trusts were formed to maintain the cross-routes between the Great Roads radiating from London. Roads leading into some provincial towns, particularly in Western England, were put under single trusts and key roads in Wales were then turnpiked. In South Wales, the roads of complete counties were put under single turnpike trusts in the 1760s. Turnpike trusts grew, such that by 1825 about 1,000 trusts controlled 18,000 miles of road in England and Wales. Interestingly, from the 1750s these Acts required trusts to erect milestones indicating the distance between the main towns on the road. Users of the road were obliged to follow what were to become rules of the road, such as driving on the left and not damaging the road surface.
Trusts could also take additional tolls during the summer to pay for watering the road in order to lay the dust thrown up by fast-moving vehicles. Parliament then passed a few general acts dealing with the administration of the trusts, along with restrictions on the width of wheels, as narrow wheels were said to cause a disproportionate amount of damage to the road. Construction of roads improved slowly, initially through the efforts of individual surveyors such as John Metcalf in Yorkshire in the 1760s. British turnpike builders began to realise the importance of selecting clean stones for surfacing, and of excluding vegetable material and clay, to make better-lasting roads. Later, after the ending of the turnpike trusts, roads were funded from taxation, and so it was that gradually a proper network of roadways was developed in Britain to supplement the use of rivers as a system of transportation. Many of these roadways were developed as a result of the trading of goods and services, such as wool, sheep, cattle and salt, as they helped link together market towns as well as harbours and ports. Other roadways were developed to meet the needs of pilgrims visiting shrines such as Walsingham, and even for the transporting of corpses from isolated places to local graveyards. Also built in medieval England were the “Four Highways”; Henry of Huntingdon wrote that Ermine Street, the Fosse Way, Watling Street and the Icknield Way were constructed by royal authority. Two new vehicle duties, the ‘Locomotive duty’ and the ‘Trade Cart duty’, were introduced in the 1888 budget, and since 1910 the proceeds of road vehicle excise duties have been dedicated to fund the building and maintenance of the road system.
From 1920 to 1937, most roads in the United Kingdom were funded from this Road Fund using taxes raised from fuel duty and Vehicle Excise Duty, but since 1937, roads have been funded from general taxation, with all motoring duties, including VAT, being paid directly to the Treasury. Tolls or congestion charges are still used for some major bridges and tunnels; for example, the Dartford Crossing has a congestion charge. The M6 Toll road, originally the Birmingham Northern Relief Road, is designed to relieve the M6 through Birmingham, as the latter is one of the most heavily used roads in the country. There were two public toll roads, Roydon Road in Stanstead Abbots, Hertfordshire and College Road in Dulwich, London, and about five private toll roads. However, since the early 2000s congestion charging has been in operation in London and Durham. Before 14 December 2018, the M4’s Second Severn Crossing, officially ‘The Prince of Wales Bridge’, included tolls; after being closed for three days for toll removal, the bridge opened again on 17 December 2018 with a formal ceremony, and toll payment was scrapped. It made its mark in history, as it is believed to be the first time in 400 years that the crossing was free!

After the election of the Labour government in 1997, most existing road schemes were cancelled and problem areas of the road network were then subjected to a range of studies to investigate non-road alternatives. In 1998 it was proposed to transfer parts of the English trunk road network to local councils, retaining central control for the network connecting major population centres, ports, airports, key cross-border links and the Trans-European Road Network. Since then, various governments have continued to implement new schemes to build new roads and widen existing ones as well as review other transport infrastructures because between 1980 and 2005 traffic increased by 80%, whilst road capacity increased by just 10%. Naturally, concern has been raised, especially in terms of damage to the countryside. Also, on 4 June 2018, a change in the law meant that learner drivers, who had previously been banned from driving on motorways, were allowed to use them when accompanied by a driving instructor in a car with dual controls. Because motorway driving is not offered as part of the practical driving test in the United Kingdom, these measures were put in place in an effort to teach motorway safety.

As so often happens when researching a subject such as this, the more one finds it seems the more there is to be found! Suffice to say that in addition to the above there is so much more that can be said about transportation by road, but I think this is enough for now!

This week…
Did you hear about the man who began a career by writing dirty jokes, but then went on to create proper poetry? He went from bawd to verse…

Click: Return to top of page or Index page


Whilst browsing through websites for some information, I found the following question: “Will aliens have similar mathematics and natural science in this universe if they exist?” One answer given was: “Not just similar, but identical. Their bit of universe will likely be exactly like ours, and if they explore it, they will arrive at the same description of it as we do. If they are able to come here, their physics is likely to be more advanced than ours, but everything we have in common will be identical. They may or may not use a different base in mathematics than we do. We commonly use base 10, but the principles of mathematics are actually independent of the base, and we can as easily do the same mathematics with base 2 (computers already do that), base 16 (people often use that as a compromise when talking to computers) or base 20 (the ancient Mayans, for instance). It’s all down to what axioms they decide to use, and if they want their mathematics to be useful to describe physics, they have to use the same axioms as well.” There are also a few folk here on Earth with crazy ideas, and one person asked if there is a risk that hostile aliens could find the location of Earth and invade. Of course, the question really is how big the risk is, because it will never be demonstrably zero. It is impossible to prove something doesn’t exist, even when it’s as intangible as a risk. Therefore, it only makes sense to look at the factors that decrease the risk – at what makes it unlikely that hostile aliens could find the location of Earth and invade. The following is a reply given by a scientist. “First, consider interstellar separation. Our current knowledge of physics implies that nothing can travel faster than light and anything which does approach that speed suffers massively from the effects of time dilation.
So either the aliens will take tens of thousands of years to travel from star to star, or time dilation makes it a one-way trip because if they return to their home planet, it will be tens of thousands of years older than when they left. The distance they must cover is mind-bogglingly huge, and the trip is expensive, dangerous, and long – unless they’ve cracked the light-speed barrier, which is very unlikely. Next, the likelihood of Earth being a useful target is low because there are few planets that are even similar to, let alone the same as, another. The aliens would have to find the Earth to be the most viable source of something valuable to them, even though their planet is vastly different from Earth. Their needs, through evolution, will match what exists on their own planet rather than here. Volatiles (hydrogen, methane, etc.) are easier to gather from gas giants and moons; metals are easier to mine from asteroids and comets. Of course, even if there are aliens that find our planet useful, they could be on the other side of the galaxy rather than anywhere near Sol. It’s likely they’ll never find us in the galactic forest or through all the clutter of gas, other systems, and so on. But for what purpose would they be hostile? There is as much, if not more, chance they would be indifferent, or helpful. Why travel across interstellar distances just to pick a fight? Following on from that, with great intelligence comes great insight and inquisitiveness and the effort towards scientific advancement. These things tend to replace or at least greatly diminish the initial basic instincts of fear, suspicion and violent tendencies. Finally, these aliens need to exist in the same time period we do. The universe and our galaxy have great age – a long past covering billions of years, and an equally long future. We have existed for a mere eye-blink of time.
The aliens probably wouldn’t arrive until long after we leave, if we ever learn the secret of getting around from star to star like they do. Either that, or they arrived before we existed and moved on. With all those factors counting against invasion, it seems there’s a very low risk.” I also recall an episode of ‘Star Trek – The Next Generation’ which involved languages. In it, Deanna Troi, the ship’s counsellor, picked up what to you and me would be a drinking cup. But she pointed out to the captain, Jean-Luc Picard, that were he to show this item to someone from a different galaxy, they might perceive the cup in quite a different manner. For example, they might see it as a treasured item, to be revered, something originally owned or used by a great ruler. Or it might be symbolic, an item shown one to another to demonstrate overcoming an enemy and in that way creating a friendship between nations. Then again, it might be an item for two leaders to drink from, thus sharing an agreement. Different countries on Earth use language, a structured system of communication used by us humans. Language can be based on speech and gesture: it can be spoken, signed or written. The structure of language is its grammar and the components are its vocabulary. Many of our languages, including the most widely-spoken ones, have writing systems that enable sounds or signs to be recorded for future use. Our language is unique among the known systems of animal communication in that it is not dependent on a single mode of transmission (sight, sound, etc.); it is highly variable between cultures and across time, and it affords a much wider range of expression than other systems. Human languages have the properties of productivity and displacement, and they also rely on social convention and learning.
Estimates of the number of human languages in the world vary between 5,000 and 7,000, though precise estimates depend on an arbitrary distinction being established between languages and dialects. Natural languages are spoken, signed or both. However, any language can be encoded into secondary media using auditory, visual, or tactile stimuli, for example writing, whistling, signing, signalling or braille.
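As a brief aside, the earlier quoted point that the principles of mathematics are independent of the base can actually be demonstrated in a few lines of code. This little sketch is purely my own illustration (the number 2022 and the helper `to_base` are arbitrary choices of mine): it writes the same quantity down in bases 2, 16 and 20, and checks that a sum worked in base 20 agrees with ordinary base-10 arithmetic.

```python
def to_base(n: int, base: int) -> str:
    """Render a non-negative integer as a digit string in the given base."""
    digits = "0123456789ABCDEFGHIJ"  # enough digit symbols for bases up to 20
    if n == 0:
        return "0"
    out = []
    while n:
        n, r = divmod(n, base)  # peel off the least significant digit
        out.append(digits[r])
    return "".join(reversed(out))

# The same quantity, written three different ways:
for base in (2, 16, 20):
    print(f"2022 in base {base} is {to_base(2022, base)}")

# Arithmetic is unaffected by the notation: adding in base 20
# gives the same answer as adding in base 10.
total = int(to_base(1000, 20), 20) + int(to_base(1022, 20), 20)
assert total == 2022
```

The same quantity simply wears different clothes in each base, which is why computers (base 2) and the ancient Mayans (base 20) could do the very same mathematics as we do in base 10.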

The English word ‘language’ derives ultimately from a Proto-Indo-European tongue through Latin and Old French. The word is sometimes used to refer to codes, ciphers and other kinds of artificially-constructed communication systems, such as those used for computer programming. Over the years there have been attempts to define what language is, and one definition sees language primarily as the mental faculty that allows humans to undertake linguistic behaviour, to learn languages and to produce and understand utterances. This definition stresses the universality of language to all humans, and it emphasises the biological basis for the human capacity for language as a unique development of the human brain. But another definition sees language as a formal system of signs, governed by grammatical rules, that combine to communicate meaning. This definition stresses that human languages can be described as closed, structural systems consisting of rules that relate particular signs directly to particular meanings.

A conversation in American Sign Language.

Throughout history, humans have speculated about the origins of language but interestingly theories about the origin of language differ in regard to their basic assumptions about what language actually is. Some theories are based on the idea that language is so complex that one cannot imagine it simply appearing from nothing in its final form, but that it must have evolved from earlier pre-linguistic systems among our pre-human ancestors. The opposite viewpoint is that language is such a unique human trait that it cannot be compared to anything found among non-humans and that it must therefore have appeared suddenly in the transition from pre-hominids to early man. Because language emerged in the early prehistory of man, before the existence of any written records, its early development has left no historical traces, and it is believed that no comparable processes can be observed today. Theories which stress continuity of language often look at animals to see if, for example, primates display any traits that can be seen as analogous to what pre-human language must have been like and to this end, early human fossils have been inspected for traces of physical adaptation to language use or pre-linguistic forms of symbolic behaviour. Among the signs in human fossils that may suggest linguistic abilities are the size of the brain relative to body mass, the presence of a larynx which is capable of advanced sound production as well as the nature of tools and other manufactured artefacts. The formal study of language is often considered to have started in India with Pānini, a 5th century BC scholar of grammar who formulated 3,959 rules of Sanskrit. However, Sumerian scribes already studied the differences between Sumerian and Akkadian grammar around 1900 BC. Subsequent grammatical traditions developed in all of the ancient cultures that adopted writing. 
In the 17th century AD, the French developed the idea that the grammars of all languages were a reflection of the universal basics of thought, and therefore that grammar was universal. Spoken language relies on the human physical ability to produce sound, a longitudinal wave propagated through the air at a frequency capable of vibrating the ear drum. This ability depends on the physiology of the human speech organs. These organs consist of the lungs, the voice box (larynx) and the upper vocal tract – the throat, the mouth, and the nose. By controlling the different parts of the speech apparatus, the airstream can be manipulated to produce different speech sounds. Some of these speech sounds, both vowels and consonants, involve release of air flow through the nasal cavity. Other sounds are defined by the way the tongue moves within the mouth such as the l-sounds, called laterals as the air flows along both sides of the tongue, and the r-sounds. By using these speech organs, humans can produce hundreds of distinct sounds. Some appear very often in the world’s languages, whilst others are more common in particular language families, areas, or even specific to a single language.

An ancient Tamil inscription at Thanjavur.

But languages express meaning by relating a sign form to a meaning, or its content. Sign forms must be something that can be perceived, for example in sounds, images or gestures, and then related to a specific meaning by social convention. Because the basic relation of meaning for most linguistic signs is based on social convention, linguistic signs can be considered arbitrary, in the sense that the convention is established socially and historically, rather than by means of a natural relation between a specific sign form and its meaning. As a result, languages must have a vocabulary of signs related to specific meaning. The English sign “dog” denotes, for example, a member of the species ‘Canis familiaris’. Depending on its type, language structure can be based on systems of sounds (speech), gestures (sign languages), or graphic or tactile symbols (writing). All spoken languages use segments such as consonants or vowels, and many use sound in other ways to convey meaning, like stress, pitch and duration of tone, whilst writing systems represent language using visual symbols, which may or may not correspond directly to the sounds of spoken language. Because all languages have a very large number of words, no purely logographic scripts are known to exist, although the best-known examples of logographic writing systems are Chinese and Japanese. Written language represents the way spoken sounds and words follow one after another by arranging symbols (letters, numbers, etc.) according to a pattern that follows a certain direction. The direction used in a writing system is entirely arbitrary and established by convention. Some writing systems use the horizontal axis (left to right like the Latin script, or right to left like the Arabic script), whilst others such as traditional Chinese writing use the vertical dimension (from top to bottom).
A few writing systems use opposite directions for alternating lines, and others, such as the ancient Maya script, can be written in either direction and rely on graphic cues to show the reader the direction of reading. In order to represent the sounds of the world’s languages in writing, linguists have developed the International Phonetic Alphabet which is designed to represent all of the discrete sounds that are known to contribute to meaning in human languages.

The Basic Structure of an English Sentence.

It is not realistically possible in this blog post for me to go into such things as grammar, parts of speech, word classes and syntax, especially as languages differ so widely in how much they rely on processes of word formation. For example, an English sentence such as “The cat sat on the mat” can be analysed in terms of grammatical functions: “The cat” is the subject, “on the mat” is a locative phrase, and “sat” is the core of the predicate. Another way in which languages convey meaning is through the order of words within a sentence. The grammatical rules, or syntax, determine why a sentence in English such as “I love you” is meaningful, but “love you I” is not. Syntactical rules determine how word order and sentence structure are constrained, and how those constraints contribute to meaning. For example, in English, the two sentences “the slaves were cursing the master” and “the master was cursing the slaves” mean different things, because the role of the grammatical subject is encoded by the noun being in front of the verb, and the role of object is encoded by the noun appearing after the verb. What can make other languages difficult to learn is that these rules may differ from one language to another! I will not go into detail over these things or aspects like the ‘accusative case’ and the ‘nominative case’, which are far beyond me! Suffice to say it has been found that whilst we have the ability to learn any language, we do so if we grow up in an environment in which that language exists and is used by others. Language is therefore dependent on communities of speakers, most usually where children learn language from their elders and peers, and they themselves then transmit language to their own children.

Owing to the way in which language is transmitted between generations and within communities, language perpetually changes, diversifying into new languages or converging due to contact with others. The process is similar to the process of evolution, but languages differ from biological organisms in that they readily incorporate elements from other languages through the process of diffusion, as speakers of different languages come into contact. Humans also frequently speak more than one language, often acquiring their first language or languages as children, then learning new languages as they grow up. Because of the increased language contact in our globalising world, many small languages are becoming endangered as their speakers shift to other languages that then afford the possibility to participate in larger and more influential speech communities. For a while, it was feared that the Welsh language was dying out, but happily more and more people are speaking it, and it is also being taught. Some years ago I learned a few words of Welsh and was amazed to find how similar some words in that language were to those in other languages, for example French. I have also had a look at Old English, but as my research showed, despite Old English being the direct ancestor of modern English, it is almost unintelligible to contemporary English speakers.

The first page of the poem Beowulf, written in Old English in the early medieval period (800–1100 AD).

To finish this week, I have included a point which actually relates to the main text above, but which I feel is quite humorous and it is this.
Many languages have grammatical conventions that signal the social position of the speaker in relation to others, like saying “your honour” when addressing a judge. But in one Australian language, a married man must use a special set of words to refer to everyday items when speaking in the presence of his mother-in-law…

Click: Return to top of page or Index page

Change Is All Around Us

Every second of the day, things change. Some lives begin, some lives end, new ideas surface whilst other things fall out of use. A little while ago I saw a TV item about a person who was considering ending their life by jumping off a motorway bridge. Thankfully they were persuaded not to; it was clearly a call for help. But in another instance one person sadly did end their life and the police had to close the road for several hours until absolutely all evidence of the tragedy had been cleared away. It was sad to learn of this, but also sad to learn that some people were found simply sitting in their cars until the motorway could be re-opened, angry and frustrated by the delays this event had caused. It seems that some people get upset and annoyed about things that they cannot control, just as when the day dawns and the rain falls. Surely we should do our very best to cope with such change. In the latter instance there were queues of traffic on the motorway, perhaps cars getting low on fuel along with children getting fractious, lorry drivers having to park up because of the hours they were allowed to drive, people missing holiday flights or perhaps even cruises. Some things happen that we can deal with, whilst others we cannot. As a family we all enjoyed going on holiday, and getting to North Devon was considered part of that holiday. We saw places we would otherwise not have known about; I was taught map-reading and learned a good sense of direction. When traffic jams occurred we looked around at other vehicles, learning makes and models, identifying registration plates to see where they were registered and how old they were. It all helped to pass the time. As time went on and I got older, I did my level best to minimise the stresses and strains of whatever I was doing, at least as best I could. I would plan ahead, managing the things that I was able to and not getting wound up over things that I could not reasonably control.
So far as holidays were concerned, I would pack my bags the night before. If I was going abroad, perhaps flying from London, I would go down to a nearby hotel the night before. On the flight, where I could, I noted where we were, although I will admit that on journeys back from the U.S.A. I tried to get a flight that left at a time such that I was back at Heathrow in the morning! It meant that I was able to sleep for much of the flight and was refreshed when we landed. I tried to make the best of my circumstances. Likewise on my lovely cruise holiday I went down to Southampton the day before the cruise began, so I would be there in good time and not be delayed or unduly stressed. The weather during much of the cruise was very good, so it wasn’t often that the sea was rough. I became used to that; in fact the gentle rocking movement was quite relaxing. At least I considered it that way, though sadly a few of the other passengers weren’t quite so comfortable. But they were the ones who also wanted air-conditioned coaches on our bus tours, and not all places had those. Some folk became quite agitated, angry even. Over the years I have seen how stress and worry affect different people in vastly different ways. Some would always see the negative side to a situation, others a very positive one, and a few had a balanced view of things. Something I was taught many years ago and really liked was a prayer that I later found out was known as the Serenity Prayer, written by the American theologian Reinhold Niebuhr (1892–1971). It is commonly quoted as follows:

Serenity Prayer.

It seems to me, especially since being in the Care Home I am presently in, that we can so easily lose sight of what one might consider to be the ‘bigger picture’ and concentrate too much on minor things that are important but not quite as vital. I recall a very good film where some people found themselves stuck in a lift which had stopped between floors. One man decided that he was going to ask his girlfriend to marry him, whilst others had similar positive thoughts, so all but one person waited patiently for help to come to get them freed. The exception was one dear lady who wanted there to be immediate action. She may not have liked what was happening to her, as she was not in control of it, but others calmed her down. She finally sat down and frantically searched right through her handbag, calling out “Where are my Tic-Tacs???”. She could not grasp why everyone else was looking at her… At various points in our lives I am sure that all of us will have various difficulties to overcome. It may be within ourselves, with a relative or a friend. It is never easy at such times to simply stop, take a deep breath, then consider what options we have. In this Care Home there are some inmates who have dementia; they are unable to think rationally or logically. One inmate, sadly no longer alive now, would go around the place ‘tidying up’, moving things around. Except they moved such things as ‘wet floor’ notices, which meant other inmates could wander around and slip on a wet floor. Covid-19 has been a real problem, as many of the inmates get into a routine, which ordinarily is good, but when they need to be isolated for a while rather than mix with others in the dining room or TV lounge, they have difficulty in understanding. I have learned that dementia does make some folk behave like young children.
Equally, some want certain things laid out in a particular way, like pot-plants, but due to the inmates’ age the plants are sometimes knocked over and so the soil goes everywhere. It is also for that reason that most inmates have meals together in the dining rooms, as it is easier for Carers to tend to them. Some inmates need bibs, others are coaxed into eating, though I know in my case I have had to be careful how much I eat because the food is good and I am sometimes given too much!

I have said before about following rules and regulations, in particular how important it is that we follow them. In the early days of rail transport and motor vehicles, certain rules and regulations had to be put in place. Over the centuries we have had rules and regulations, and a major one is quite well-known, this being the Code of Hammurabi. It is a Babylonian legal text which was composed c. 1755–1750 BC. It is the longest, best-organised and best-preserved legal text from the ancient Near East; it is written in the Old Babylonian dialect of Akkadian and is purported to have been written by Hammurabi, sixth king of the First Dynasty of Babylon. The primary copy of the text is inscribed on a basalt or diorite ‘stele’ (plural stelae), some 7ft 4 1⁄2in (2.25m) tall. A stele (pronounced ‘stee-lee’), or occasionally ‘stela’ when derived from Latin, is a stone or wooden slab, generally taller than it is wide, erected in the ancient world as a monument. The surface of the stele often has text, ornamentation, or both, and these may be inscribed, carved in relief, or painted. Stelae were created for many reasons. Grave stelae were used for funerary or commemorative purposes. Stelae as slabs of stone would also be used as Ancient Greek and Roman government notices or to mark border or property lines. They were also occasionally erected as memorials to battles. For example, along with other memorials, there are more than half-a-dozen stelae erected on the battlefield of Waterloo at the locations of notable actions by participants in the battle. Traditional Western gravestones may technically be considered the modern equivalent of ancient stelae, though the term is very rarely applied in this way.
Equally, stele-like forms in non-Western cultures may be called by other terms, and the words ‘stele’ and ‘stelae’ are most consistently applied in archaeological contexts to objects from Europe, the ancient Near East and Egypt, China, as well as Pre-Columbian America. The stele showing the Code of Hammurabi was discovered in 1901 at the site of Susa in present-day Iran, where it had been taken as plunder six hundred years after its creation. The text itself was copied and studied by Mesopotamian scribes for over a millennium. The stele now resides in the Louvre Museum. The top of the stele features an image in relief of Hammurabi with Shamash, the Babylonian sun-god and god of justice. Below the relief are about 4,130 lines of cuneiform text; one-fifth contains a prologue and epilogue in poetic style, whilst the remaining four-fifths contain what are generally called the laws. In the prologue, Hammurabi claims to have been granted his rule by the gods “to prevent the strong from oppressing the weak”. The laws are in a ‘casuistic’ form, expressed as logical ‘if…then’ conditional sentences. Their scope is broad, including criminal, family, property and commercial law. Modern scholars have responded to the Code with admiration at its perceived fairness and respect for the rule of law, and at the complexity of Old Babylonian society. There has also been much discussion of its influence on Mosaic law, primarily referring to the Torah or the first five books of the Hebrew bible. Despite some uncertainty surrounding these issues, Hammurabi is regarded outside Assyriology as an important figure in the history of law, and the document as a true legal code. The U.S. Capitol has a relief portrait of Hammurabi alongside those of other lawgivers, and there are replicas of the stele in numerous institutions, including the United Nations headquarters in New York City and the Pergamon Museum in Berlin.
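As an aside for the more technically minded, the ‘casuistic’ if…then form maps quite naturally onto a conditional in a programming language. Purely as an illustration: the wording below is a loose paraphrase of the well-known builder’s law, not a translation, and the data fields are invented.

```python
def casuistic_rule(case):
    """A Hammurabi-style law rendered as an 'if...then' conditional.
    If the stated circumstances all hold, the prescribed consequence
    follows; otherwise the rule is simply silent."""
    if (case.get("role") == "builder"
            and case.get("house_collapsed")
            and case.get("owner_killed")):
        return "the builder shall be put to death"
    return None  # the rule says nothing about any other case

# The rule fires only when every condition of the 'if' clause is met:
print(casuistic_rule({"role": "builder", "house_collapsed": True, "owner_killed": True}))
# and stays silent otherwise:
print(casuistic_rule({"role": "farmer"}))
```

The point is simply that each law describes a concrete case and its consequence, rather than stating an abstract general principle.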

Babylonian territory before (red) and after (orange) Hammurabi’s reign.

Hammurabi ruled from 1792 to 1750 BC and he secured Babylonian dominance over the Mesopotamian plain through military prowess, diplomacy and treachery. When he inherited his father’s throne, Babylon held little local control. The local leader was Rim-Sin of Larsa. Hammurabi waited until Rim-Sin grew old, then conquered his territory in one swift campaign, leaving his organisation intact. Later, Hammurabi betrayed allies in nearby territories in order to gain control of them. Hammurabi had an aggressive foreign policy, but his letters suggest he was concerned with the welfare of his many subjects and was interested in law and justice. He commissioned extensive construction works and in his letters he frequently presented himself as his ‘people’s shepherd’. Justice was also a theme of the prologue to his Code. Although Hammurabi’s Code was the first Mesopotamian law collection discovered, it was not the first written, as several earlier collections survive. These collections were written in Sumerian and Akkadian, and they also purport to have been written by rulers. There were almost certainly more such collections, as statements by other rulers suggest the custom was widespread, and the similarities between these law collections make it tempting to assume a consistent underlying legal system. There are additionally thousands of documents from the practice of law, from before and during the Old Babylonian period. These documents include contracts, judicial rulings, letters on legal cases, as well as reform documents. Mesopotamia has the most comprehensive surviving legal corpus from before the Digest of Justinian, even compared to those from Rome and ancient Greece.

The Royal City (left) and Acropolis (right) of Susa in 2007.

The whole Code of Hammurabi is far too long to detail in this blog post. Just the prologue and epilogue together occupy one-fifth of the text! Out of around 4,130 lines, the prologue occupies 300 lines and the epilogue occupies 500. The 300-line prologue begins with an etiology of Hammurabi’s royal authority, in which Hammurabi lists his achievements and virtues. Unlike the prologue, the 500-line epilogue is explicitly related to the laws and begins with the words “these are the just decisions which Hammurabi has established”. He exalts his laws and his magnanimity, then expresses a hope that “any wronged man who has a lawsuit may have the laws of the stele read aloud to him and know his rights”. Hammurabi wished good fortune for any ruler who heeded his pronouncements and respected his stele; however, at the end of the text he invoked the wrath of the gods on any man who disobeyed or erased his pronouncements. The epilogue contains much legal imagery, and the phrase “to prevent the strong from oppressing the weak” is reused from the prologue. However, the king’s main concern appears to be ensuring that his achievements are not forgotten and his name not sullied. The list of curses heaped upon any future defacer is 281 lines long and extremely forceful, and some of the curses are very vivid, for example “may the god Sin decree for him a life that is no better than death”; “may he (the future defacer) conclude every day, month, and year of his reign with groaning and mourning” and “may he experience the spilling of his life force like water”. Hammurabi implored a variety of gods individually to turn their particular attributes against the defacer. For example: “may the Storm God deprive him of the benefits of rain from heaven and flood from the springs” and “may the God of Wisdom deprive him of all understanding and wisdom and lead him into confusion”.
Time passed and the essential structure of international law was mapped out during the European Renaissance period, though its origins lay deep in history and can be traced to cooperative agreements between peoples in the ancient Middle East. Many of the concepts that today underpin the international legal order were established during the Roman Empire and the ‘Law of Nations’, for example, was invented by the Romans to govern the status of foreigners and the relations between foreigners and Roman citizens. In accord with the Greek concept of natural law, which they adopted, the Romans conceived the law of nations as having universal application. In the Middle Ages, the concept of natural law, along with religious principles through the writings of Jewish philosophers and theologians, became the intellectual foundation of the new discipline of the law of nations, regarded as that part of natural law that applied to the relations between sovereign states. After the collapse of the western Roman Empire in the 5th century, Europe suffered from frequent warring for nearly 500 years. Eventually, a group of nation states emerged and a number of sets of rules were developed to govern international relations. In the 15th century the arrival of Greek scholars in Europe from the collapsing Byzantine Empire and the introduction of the printing press spurred the development of scientific, humanistic, and individualist thought, whilst the expansion of ocean navigation by European explorers spread European norms throughout the world and broadened the intellectual and geographic horizons of western Europe. The subsequent consolidation of European states with increasing wealth and ambitions, coupled with the growth in trade, necessitated the establishment of a set of rules to regulate their relations. 
In the 16th century the concept of sovereignty provided a basis for the entrenchment of power in the person of the king and was later transformed into a principle of collective sovereignty as the divine right of kings gave way constitutionally to parliamentary or representative forms of government. Sovereignty also acquired an external meaning, referring to independence within a system of competing nation-states. Scholars produced new writings, focussing greater attention on the law of peace and the conduct of international relations than on the law of war, as the focus shifted away from the conditions necessary to justify the resort to force, in order to deal with increasingly sophisticated relations in areas such as the law of the sea and commercial treaties. Various philosophies grew, bringing with them the acceptance of the concept of natural rights, which played a prominent role in the American and French revolutions and which was becoming a vital element in international politics. In international law, however, the concept of natural rights had only marginal significance until the 20th century. It was the two World Wars of the 20th century that brought about the real growth of international organisations, for example the League of Nations, founded in 1919, and the United Nations, founded in 1945. This led to the increasing importance of human rights. Having become geographically international through the colonial expansion of the European powers, international law became truly international in the first decades after World War II, when decolonisation resulted in the establishment of scores of newly independent states.
The collapse of the Soviet Union and the end of the Cold War in the early 1990s increased political cooperation between the United States and Russia and their allies across the Northern Hemisphere, but tensions also increased between states of the north and those of the south, especially on issues such as trade, human rights, and the law of the sea. Technology and globalisation, the rapidly escalating growth in the international movement of goods, services, currency, information and persons, also became significant forces, spurring international cooperation and tending to reduce the ideological barriers that divided the world. However, there are still trade tensions between various countries at various times, for reasons that at times seem inexplicable. As I have said before, the one constant in this Universe is that things change!

This week, a familiar phrase…
The phrase “turn a blind eye” is often used to refer to a wilful refusal to acknowledge a particular reality, and dates back to a legendary chapter in the career of the British naval hero Horatio Nelson. During 1801’s Battle of Copenhagen, Nelson’s ships were pitted against a large Danish-Norwegian fleet. When his more conservative superior officer flagged for him to withdraw, the one-eyed Nelson supposedly brought his telescope to his bad eye and blithely proclaimed, “I really do not see the signal.” He went on to score a decisive victory. Some historians have since dismissed Nelson’s famous quip as merely a battlefield myth, but the phrase “turn a blind eye” persists to this day.

Click: Return to top of page or Index page

The History Of Rail Transport

On 21 February 1804, the world’s first steam-powered railway journey took place when Trevithick’s unnamed steam locomotive hauled a train along the tramway of the Penydarren ironworks, near Merthyr Tydfil in South Wales. But in fact, the history of rail transport began in prehistoric times. It can be divided into several discrete periods as defined by the principal means of track material and motive power used. The Post Track, a prehistoric causeway in the valley of the River Brue in the Somerset Levels, is one of the oldest known constructed trackways and dates from around 3838 BC, making it some 30 years older than the Sweet Track from the same area. Various sections have actually been scheduled as ancient monuments. Evidence indicates that there was a 6 to 8.5km long Diolkos paved trackway, which transported boats across the Isthmus of Corinth in Greece from around 600 BC. Wheeled vehicles pulled by men and animals ran in grooves in limestone, which provided the track element, preventing the wagons from leaving the intended route. The Diolkos was in use for over 650 years, until at least the 1st century AD. Paved trackways were also later built in Roman Egypt. In China, a railway has been discovered in the south-west Henan province near Nanyang city. It has been carbon-dated to about 2,200 years old, from the Qin dynasty. The rails were made from hard wood and treated against decay, whilst the sleepers or railway ties were made from wood that was not treated and have therefore rotted. Qin railway sleepers were designed to allow horses to gallop through to the next rail station, where they would be swapped for a fresh horse. The railway is theorised to have been used for transportation of goods to front line troops and to repair the Great Wall.

The Reisszug, as it appears today.

The oldest operational railway is the Reisszug, a funicular railway at the Hohensalzburg Fortress in Austria, believed to date back to either 1495 or 1504 AD. Cardinal Matthäus Lang wrote a description of it in 1515, detailing a cable-type railway which connected points along a track laid on a steep slope. The system is characterised by two counterbalanced carriages that are permanently attached to opposite ends of a haulage cable, which is looped over a pulley at the upper end of the track. The result of such a configuration is that the two carriages move synchronously: as one ascends, the other descends at an equal speed. This feature distinguishes funiculars from inclined elevators, which have a single car that is hauled uphill. The line originally used wooden rails with a hemp haulage rope and was operated by human or animal power, through a treadwheel. The line still exists and remains operational, although in updated form.
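The counterbalancing principle above can be sketched with a little arithmetic: because the two carriages hang from opposite ends of one cable looped over the top pulley, their weights along the slope largely cancel, and the treadwheel (or winch) only has to supply the difference. A minimal illustration, with made-up masses and slope angle (the figures are purely hypothetical, and friction is ignored):

```python
import math

def drive_force(m_up_kg, m_down_kg, slope_deg, g=9.81):
    """Force (in newtons) the haulage system must supply, ignoring friction.
    Only the *difference* between the carriages' weights along the slope
    matters, because the cable over the upper pulley couples them."""
    along_slope = g * math.sin(math.radians(slope_deg))
    return (m_up_kg - m_down_kg) * along_slope

# Hypothetical figures: a 1000 kg carriage ascending, an 800 kg one descending, 30-degree slope.
counterbalanced = drive_force(1000, 800, 30)
# Hauling the 1000 kg carriage alone, with no counterweight, would need:
solo = drive_force(1000, 0, 30)
print(f"counterbalanced: {counterbalanced:.0f} N, solo haul: {solo:.0f} N")
```

With these made-up numbers the counterbalanced system needs only about a fifth of the force of a solo haul, which is what made human or animal power practical on so steep a slope.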

A mining cart, shown in De Re Metallica (1556).

Wagonways, otherwise called tramways, using wooden rails and horse-drawn traffic, are known to have been used in the 1550s to facilitate the transportation of ore tubs to and from mines. They soon became popular in Europe and an example of their operation is shown in an illustration by Georgius Agricola. This line used ‘Hunde’ carts with un-flanged wheels running on wooden planks, with a vertical pin on the truck fitting into the gap between the planks to keep it going the right way. The miners called the wagons ‘Hunde’, or ‘dogs’, from the noise they made on the tracks. There are many references to wagonways in central Europe in the 16th century and these were introduced to England by German miners, the first being at Caldbeck, Cumbria, quite possibly in the 1560s. A wagonway was built at Prescot near Liverpool some time around 1600, possibly even as early as 1594. Owned by Philip Layton, the line carried coal from a pit near Prescot Hall to a terminus about half a mile away. A funicular railway was built at Broseley in Shropshire some time before 1604, and this carried coal for James Clifford from his mines down to the River Severn, to be loaded onto barges and carried to riverside towns. The Wollaton Wagonway was completed in 1604 by Huntingdon Beaumont (c.1560–1624), an English coal mining entrepreneur who built two of the earliest wagonways in England for the trans-shipment of coal. However, he was less successful as a businessman and died having been imprisoned for debt. The youngest of four sons, he was born to Sir Nicholas Beaumont and his wife Ann Saunders. They were an aristocratic family in the East Midlands and there were several branches to the Beaumont dynasty. This one was based at Coleorton, Leicestershire, approximately 2 miles (3.2km) east of Ashby de la Zouch. Beaumont was therefore of gentleman status in the formal Elizabethan sense; the family owned coal-bearing lands and worked them.
He was involved in this coal working and eventually began working in his own right in the Nottingham area. During 1603 and 1604, in partnership with Sir Percival Willoughby, who was Lord of the Wollaton Manor, Beaumont constructed the wagonway which ran from Strelley, where Beaumont held mining leases, to Wollaton Lane. Beaumont was a successful coal prospector and an innovator in the development of mining techniques; a key innovation attributed to him is the introduction of boring rods to assist in finding coal without sinking a shaft. His working life covered involvement in coal mining activities in Warwickshire, Leicestershire, Nottinghamshire and Northumberland. His coal mining and wagonway activities in the early 1600s near Blyth in Northumberland were, like most of his ventures, unprofitable, but the boring rod and wagonway technology he took with him was implemented by others to significant effect, and the chain of wagonways he started in the English north-east was later to influence George Stephenson. In fact a major coal seam in the region was named the Beaumont Seam, commemorating his engineering efforts there. However, Beaumont lost considerable sums of money borrowed from friends and family, and he died in Nottingham Gaol in 1624, having been imprisoned for debt. The Middleton Railway in Leeds, which was built in 1758, later became the world’s oldest operational railway (other than funiculars), albeit now in an upgraded form, whilst in 1764 the first railway in America was built in Lewiston, New York.

The introduction of steam engines for blowing air into blast furnaces led to a large increase in British iron production after the mid-1750s. Coalbrookdale, a village in the Ironbridge Gorge in Shropshire, was a settlement of great significance in the history of iron ore smelting, as this is where iron ore was first smelted with coke by Abraham Darby (14 April 1677 – 5 May 1717). He was the first and best-known of several men of that name and was born into an English Quaker family that played an important role in the Industrial Revolution. Darby developed a method of producing pig iron in a blast furnace fuelled by coke rather than charcoal, and this was a major step forward in the production of iron as a raw material. The coal was drawn from drift mines in the sides of the valley and, as it contained far fewer impurities than normal coal, the iron it produced was of a superior quality. Along with many other industrial developments that were going on in other parts of the country, this discovery was a major factor in the growing industrialisation of Britain. In the late 1760s, the Coalbrookdale Company began to fix plates of cast iron to the upper surface of wooden rails, which increased their durability and load-bearing ability. At first only ‘balloon loops’, or turning loops, could be used for turning wagons, but later, movable points were introduced that allowed passing loops to be created. A system was introduced in which un-flanged wheels ran on L-shaped metal plates. It is said that a Sheffield colliery manager invented this flanged rail in 1787, though the exact date is disputed. The plate rail was taken up by a Benjamin Outram for wagonways serving his canals, manufacturing them at his Butterley ironworks, and in 1803 a William Jessop opened the Surrey Iron Railway. This was a double-track plateway, sometimes erroneously cited as the world’s first public railway, in south London.
By 1789 Jessop had introduced a form of all-iron edge rail and flanged wheels for an extension to the Charnwood Forest Canal at Nanpantan, Leicestershire. Then in 1790, Jessop and his partner Outram began to manufacture edge-rails. The first public edgeway built was the Lake Lock Rail Road in 1796; although the primary purpose of the line was to carry coal, it also carried passengers. These two systems of constructing iron railways, the ‘L’ plate-rail and the smooth edge-rail, continued to exist side by side into the early 19th century, but the flanged wheel and edge-rail eventually proved their superiority and became the standard for railways. Cast iron was not a satisfactory material for rails because it was brittle and broke under heavy loads; however the wrought iron rail, invented by John Birkinshaw in 1820, solved these problems. Wrought iron, usually referred to simply as ‘iron’, was a ductile material that could undergo considerable deformation before breaking, thus making it more suitable for rails. But this iron was expensive to produce until a Henry Cort patented the ‘puddling process’ in 1784. He had also patented the rolling process, which was fifteen times faster at consolidating and shaping iron than hammering. These processes greatly lowered the cost of producing iron and iron rails. The next important development in iron production was the ‘hot blast’ process, developed by a James Neilson and patented in 1828, which considerably reduced the amount of coke fuel or charcoal needed to produce pig iron. However, wrought iron was a soft material that contained slag or ‘dross’, and this tended to make iron rails distort and delaminate, so they typically lasted less than 10 years in use, and sometimes as little as one year under high traffic. All these developments in the production of iron eventually led to the replacement of composite wood/iron rails with superior all-iron rails.
The introduction of the Bessemer process provided the first inexpensive industrial-scale method for the mass production of steel from molten pig iron, before the development of the open hearth furnace. The key principle is the removal of impurities from the iron by oxidation, with air being blown through the molten iron. The oxidation also raises the temperature of the iron mass and keeps it molten. This enabled steel to be made relatively inexpensively and led to the era of great expansion of railways that began in the late 1860s. Steel rails lasted several times longer than iron; they also made heavier locomotives possible, thus allowing for longer trains and improving the productivity of railways. The quality of steel had been improved by the end of the 19th century, further reducing costs, and as a result steel completely replaced the use of iron in rails, becoming standard for all railways. In 1769 James Watt, a Scottish inventor and mechanical engineer, greatly improved the steam engine of Thomas Newcomen, which had been used to pump water out of mines. Watt developed a reciprocating engine capable of powering a wheel. Although the Watt engine powered cotton mills and a variety of machinery, it was a large stationary engine which could not be used otherwise, as the state of boiler technology necessitated the use of low-pressure steam acting upon a vacuum in the cylinder, and this required a separate condenser with an air pump. Nevertheless, as the construction of boilers improved, Watt investigated the use of high-pressure steam acting directly upon a piston. This raised the possibility of a smaller engine that might then be used to power a vehicle, and in 1784 he patented a design for a steam locomotive. His employee, William Murdoch, produced a working model of a self-propelled steam carriage in that year.

A replica of Trevithick’s engine at the National Waterfront Museum, Swansea.

The first full-scale working railway steam locomotive was built in the United Kingdom in 1804 by Richard Trevithick, a British engineer born in Cornwall. This engine used high-pressure steam to drive the engine by one power stroke, whilst the transmission system employed a large flywheel to even out the action of the piston rod. On 21 February 1804, the world’s first steam-powered railway journey took place when Trevithick’s unnamed steam locomotive hauled a train along the tramway of the Penydarren ironworks near Merthyr Tydfil, South Wales. Trevithick later demonstrated a locomotive operating upon a piece of circular rail track in Bloomsbury, London but he never got beyond the experimental stage with railway locomotives, not least because his engines were too heavy for the cast-iron plateway track which was then in use.

The ‘Locomotion’ at Darlington Railway Centre and Museum.

Inspired by earlier locomotives, in 1814 George Stephenson persuaded the manager of the Killingworth colliery where he worked to allow him to build a steam-powered machine. Stephenson played a pivotal role in the development and widespread adoption of the steam locomotive, as his designs considerably improved on the work of the earlier pioneers. In 1829 he built the locomotive ‘Rocket’, which was entered in and won the Rainhill Trials, and this success led to Stephenson establishing his company as the pre-eminent builder of steam locomotives for railways in Great Britain and Ireland, the United States, and much of Europe. Steam power continued to be the dominant power system in railways around the world for more than a century. Since then, manufacturers around the world have developed diesel and electric trains of ever greater power. We have made high-speed trains and it really is amazing to see the differences which have occurred in such a relatively short space of time!

This week…
There is a well-known phrase, “fine words butter no parsnips”. This proverbial phrase dates from the 17th century and expresses the notion that fine words count for nothing, whilst action means more than flattery or promises. We aren’t very likely to come across the phrase in everyday speech these days and it is more likely to be heard in a period costume drama. But the phrase comes from a time before potatoes, imported into Britain from America by John Hawkins in the mid 16th century, became a staple in what established itself as the national dish of meat and two veg. Before that, various root vegetables were eaten instead, often mashed and, as anyone who has eaten mashed swedes, turnips or parsnips can testify, they cry out to be ‘buttered up’ – another term for flattery. It has even been said that we were known for our habit of layering butter on all manner of foods, much to the disgust of the French, who used it as evidence of the English lack of expertise regarding cuisine!

Click: Return to top of page or Index page

Some 20th Century Changes

After my research for last week’s blog post on workhouses, I felt that there had to be a bit more to the story, so I continued looking and this led on to law in general. Now that is an absolutely huge subject that I cannot possibly hope to encompass in my blogs, but I can perhaps highlight a few things we have either forgotten or simply were never told about, as the law can be a fascinating insight into the priorities of a particular period in time and quite rightly it is constantly changing. For example, in 1313 MPs were banned from wearing armour or carrying weapons in Parliament, a law which still stands today. Others, such as the monarch’s guards, are still permitted to carry weapons, just not MPs themselves. It’s easy enough to see the chain of events that prompts laws to be written or changed. For example, rule 264 of the Highway Code states that when driving along a three-lane motorway, “You should always drive in the left-hand lane when the road ahead is clear. If you are overtaking a number of slow-moving vehicles, you should return to the left-hand lane as soon as you are safely past.” Middle-lane hogging is when vehicles remain in the middle lane longer than necessary, even when there aren’t any vehicles in the inside lane to overtake, and since around 2013 the police have been able to penalise it as careless driving. So in a hundred years’ time, this rule might be seen as a historical curiosity, although it was heartily welcomed by many drivers. So for rather obvious reasons, some laws are still standing whilst others have been dropped as they are inappropriate in the modern day. But go back to the late 19th century and there were the Locomotive Acts, or Red Flag Acts, a series of Acts of Parliament which regulated the use of mechanically propelled vehicles on British public highways. The first three, the Locomotive Act 1861, the Locomotive Act 1865 and the Highways and Locomotives (Amendment) Act 1878, contained restrictive measures on the manning and speed of operation of road vehicles.
They also formalised many important road concepts like vehicle registration, registration plates, speed limits, maximum vehicle weight over structures such as bridges, and the organisation of highway authorities. The most draconian restrictions and speed limits were imposed by the 1865 Act, also known as the ‘Red Flag’ Act, which required “all road locomotives, including automobiles, to travel at a maximum of 4mph (6.4km/h) in the country and 2mph (3.2km/h) in the city, as well as requiring a man carrying a red flag to walk in front of road vehicles hauling multiple wagons”. However, the Locomotives on Highways Act 1896 removed some restrictions of the 1865 Act and also raised the speed limit to 14mph (23km/h). But first, let us go back to earlier times. For example, the First Act of Supremacy 1534. Over the course of the 1520s and 1530s, Henry VIII passed a series of laws that changed life in England entirely, and the most significant of these was this First Act of Supremacy which declared that Henry VIII was the Supreme Head of the Church of England instead of the Pope, effectively severing the link between the Church of England and the Roman Catholic Church, and providing the cornerstone for the English Reformation. This change was so far-ranging that it is difficult to cover every effect that it had. It meant that England (and ultimately, Britain) would be a Protestant country rather than a Catholic one, with consequences for her allies and her sense of connection to the other countries of Europe. It gave Henry VIII additional licence to continue plundering and shutting down monasteries, which had been huge centres of power in England, with really significant consequences in that their role in alleviating poverty, and providing healthcare and education, was lost. It led to centuries of internal and external conflict between the Church of England and other faiths, some of which are still ongoing today. In fact, until 1707, there was no such thing as the United Kingdom.
There was England, and there was Scotland, two countries which had shared a monarch since 1603 but which were otherwise legally separate. But by 1707, the situation was becoming increasingly difficult, and union seemed to solve both sides’ fears that they were dangerously exposed to exploitation by the other. England and Scotland had been at each other’s throats since a time before the nations of ‘England’ and ‘Scotland’ even formally existed. The Acts of Union did not bring that to an end right away, but ultimately these ancient enemies became one of the most enduring political unions that has ever existed. That isn’t to say it has gone entirely uncontested as in 2014, a referendum was held on Scottish independence where 55% of voters opted to remain in the union. We should also recall 1807, when the Slave Trade Act was introduced. In fact Britain had played a pivotal role in the international slave trade, though slavery had been illegal in Britain itself since 1102 but with the establishment of British colonies overseas, slaves were used as agricultural labour across the British empire. It was estimated that British ships carried more than three million slaves from Africa to the Americas, second only to the five million slaves which were transported by the Portuguese. The Quakers, or the Religious Society of Friends to give them their proper name, were a nonviolent, pacifist religious movement founded in the mid-17th century who were opposed to slavery from the start of their movement. They pioneered the Abolitionist movement, despite being a marginalised group in their own right. As non-Anglicans, they were not permitted to stand for Parliament. They founded a group to bring non-Quakers on board so as to have greater political influence, as well as working to raise public awareness of the horrors of the slave trade. This was achieved through the publication of books and pamphlets. The effect of the Slave Trade Act, once passed, was rapid. 
The Royal Navy, which was the leading power at sea at the time, patrolled the coast of West Africa and between 1808 and 1860 freed 150,000 captured slaves. Then in 1833, slavery was finally banned throughout the British Empire. In the first few decades of the Industrial Revolution, conditions in British factories were frequently abysmal. 15-hour working days were usual and this included weekends. Apprentices were not supposed to work for more than 12 hours a day, and factory owners were not supposed to employ children under the age of 9, but a parent’s word was considered sufficient to prove a child’s age and even these paltry rules were seldom enforced. Yet the wages that factories offered were still so much better than those available in agricultural labour that there was no shortage of workers willing to put up with these miserable conditions, at least until they had earned enough money to seek out an alternative. It was a similar social movement to the one that had brought an end to slavery that fought child labour in factories, and it was also believed that reducing working hours for children would lead to a knock-on effect whereby working hours for adults would also be reduced. The Factory Act of 1833, among a host of changes, banned children under 9 from working in textile mills, banned children under 18 from working at night, and required that children between 9 and 13 hold a schoolmaster’s certificate showing they had received two hours’ education per day before they were permitted to work. So the Factory Act not only improved factory conditions, but also began to pave the way towards education for all. Then, just two years later, a law against cruelty to animals followed. Until 1835, there had been no laws in Britain to prevent cruelty to animals, except one in 1822, which exclusively concerned cattle. Animals were property, and could be treated in whatever way the property-owner wished.
It was actually back in 1824 that a group of reformers founded the Society for the Prevention of Cruelty to Animals, which we know today as the RSPCA. Several of those reformers had also been involved in the abolition of the slave trade, such as the MP William Wilberforce. Their initial focus was on working animals such as pit ponies, which worked in mines, but that soon expanded. The Cruelty to Animals Act of 1835, for which the charity’s members lobbied, outlawed bear-baiting and cockfighting, as well as paving the way for further legislation for things such as creating veterinary hospitals, and improving how animals were transported.

The RSPCA began by championing the rights of the humble pit pony.

Prior to the Married Women’s Property Act 1870, when a woman married a man, she ceased to exist as a separate legal being. All of her property prior to marriage, whether accumulated through wages, inheritance, gifts or anything else became his, and any property she came to possess during marriage was entirely under his control, not hers. There were a handful of exceptions, such as money held in trust, but this option was out of reach of all but the very wealthy. Given the difficulty of seeking a divorce at this time, this effectively meant that a man could do whatever he wished with his wife’s money, including leaving her destitute, and she would have very little legal recourse. But the Act changed this. It gave a woman the right to control money she earned whilst married, as well as keeping inherited property, and made both spouses liable to maintain their children from their separate property, something that was important in relation to custody rights on divorce. The Act was not retrospective, so women who had married and whose property had come into the ownership of their husbands were not given it back, which limited its immediate effect. But ultimately, it was a key stage on the long road to equality between men and women in Britain. It is clear that 1870 was a big year in British politics so far as education was concerned. This is because before then, the government had provided some funding for schools but this was piecemeal and there were plenty of areas where there were simply no school places to be found. This was complicated by the fact that many schools were run by religious denominations, as there was conflict over whether the government should fund schools run by particular religious groups. As has been seen, under the Factory Act 1833 there were some requirements that children should be educated, but these were frequently ignored.
Previously, industrialists had seen education as undesirable, at least when focusing on their ‘bottom line’, as hours when children were in education represented hours when they were not able to work. There were some factory jobs that only children could perform, for instance because of their size. But as automation advanced, it increasingly became the case that a lack of educated workers was holding back industrial production, so industrialists became a driving force in pushing through comprehensive education. The Education Act of 1870 didn’t provide free education for all, but it did ensure that schools would be built and funded wherever they were needed, so that no child would miss out on an education simply because they didn’t live near a school. We take it for granted now, but free education for all was not achieved until 1944. The Representation of the People Act 1918, passed at the end of the First World War, is chiefly remembered as the act that gave women the right to vote, but in fact it went further than that. Only 60% of men in Britain had the right to vote prior to 1918, as voting rights were restricted to men who owned a certain amount of property. Elections had been postponed until the end of the First World War and now, in an atmosphere of revolution, Britain was facing millions of soldiers who had fought for their country returning home and being unable to vote. This was clearly unacceptable. As a result, the law was changed so that men aged over 21, or men who had turned 19 whilst fighting in the First World War, were given the vote. But it was also evident that women had contributed hugely to the war effort, and so they too were given the vote under restricted circumstances. The vote was granted to women over 30 who owned property, were graduates voting in a university constituency, or who were either a member, or married to a member, of the Local Government Register.
The belief was that this set of limitations would mean that mostly married women would be voting, and therefore that they would mostly vote the same way as their husbands, so it wouldn’t make too much difference. Women were only granted equal suffrage with men in 1928. Then in 1946 came the National Health Service Act. I personally think that we should be proud of our free health service, especially after I learned what residents of some other countries have to do in order to obtain medical care. In 1942, economist William Beveridge had published a report on how to defeat the five great evils of society, these being squalor, ignorance, want, idleness, and disease. Ignorance, for instance, was to be defeated through the 1944 Education Act, which made education free for all children up to the age of 15. But arguably the most revolutionary outcome of the Beveridge Report was his recommendation to defeat disease through the creation of the National Health Service. This was the principle that healthcare should be free at the point of service, paid for by a system of National Insurance so that everyone paid according to what they could afford. One of the principles behind this was that if healthcare were free, people would take better care of their health, thereby improving the health of the country overall. Or to put it another way, someone with an infectious disease would get it treated for free and then get back to work, rather than hoping it would go away, infecting others and leading to lots of lost working hours. It is an idea that was, and remains, hugely popular with the public.


As I said last week, there had been several new laws with the gradual closure of workhouses and by the beginning of the 20th century some infirmaries were even able to operate as private hospitals. A Royal Commission of 1905 reported that workhouses were unsuited to deal with the different categories of resident they had traditionally housed, and it was recommended that specialised institutions for each class of pauper should be established in which they could be treated appropriately by properly trained staff. The ‘deterrent’ workhouses were in future to be reserved for those considered as incorrigibles, such as drunkards, idlers and tramps. In Britain during the early 1900s, average life span was considered to be about 47 for a man and 50 for a woman. By the end of the century, it was about 75 and 80. Life was also greatly improved by new inventions. In fact, even during the depression of the 1930s things improved for most of the people who had a job. Of course we also had the First World War, where so many people lost their lives. So far as the United Kingdom and the Colonies are concerned, during that war there were about 888,000 military deaths (from all causes) and about 17,000 civilian deaths due to military action and crimes against humanity. There were also around 1,675,000 military wounded. Then during the Second World War, again just in the United Kingdom (including Crown Colonies), there were almost 384,000 military deaths (from all causes), some 67,200 civilian deaths due to military action and crimes against humanity, as well as almost 376,000 military wounded. On 24 January 1918 it was reported in the Daily Telegraph that the Local Government Committee on the Poor Law had presented to the Ministry of Reconstruction a report recommending abolition of the workhouses and transferring their duties to other organisations. That same year, free primary education for all children was provided in the UK.
Then a few years later the Local Government Act of 1929 gave local authorities the power to take over workhouse infirmaries as municipal hospitals, although outside London few did so. The workhouse system was officially abolished in the UK by the same Act on 1 April 1930, but many workhouses, renamed Public Assistance Institutions, continued under the control of local county councils. At the outbreak of the Second World War in 1939 almost 100,000 people were accommodated in the former workhouses, 5,629 of whom were children. Then the 1948 National Assistance Act abolished the last vestiges of the Poor Law, and with it the workhouses. Many of the buildings were converted into retirement homes run by the local authorities, so by 1960 slightly more than half of local authority accommodation for the elderly was provided in former workhouses. Under the Local Government Act 1929, the boards of guardians, who had been the authorities for poor relief since the Poor Law Amendment Act 1834, were abolished and their powers transferred to county and county borough councils. The basic responsibilities of the statutory public assistance committees set up under the Act included the provision of both ‘indoor’ and ‘outdoor’ relief. Those unable to work on account of age or infirmity were housed in Public Assistance (formerly Poor Law) Institutions and provided with the necessary medical attention, the committee being empowered to act, in respect of the sick poor, under the terms of the Mental Deficiency Acts 1913-27, the Maternity and Child Welfare Act 1918 and the Blind Persons Act 1920, in a separate capacity from other county council committees set up under those Acts. Outdoor relief for the able-bodied unemployed took the form of ‘transitional payments’ by the Treasury, which were not conditional on previous national insurance contributions, but subject to assessment of need by the Public Assistance Committee. 
The Unemployment Act 1934 transferred the responsibility for ‘transitional payments’ to a national Unemployment Assistance Board (re-named ‘Assistance Board’ when its scope was widened under the Old Age Pensions and Widows Pensions Act, 1940). Payment was still dependent on a ‘means test’ conducted by visiting government officials and, at the request of the government, East Sussex County Council, in common with other rural counties, agreed that officers of its Public Assistance Department should act in this capacity for the administrative county, excepting the Borough of Hove, for a period of eighteen months after the Act came into effect. Other duties of the Public Assistance Committee included the apprenticing and boarding-out of children under its care, arranging for the emigration of suitable persons, and maintaining a register of all persons in receipt of relief. Under the National Health Service Act 1946, Public Assistance hospitals were then transferred to the new regional hospital boards, and certain personal health services to the new Health Committee. The National Insurance Act 1946 introduced a new system of contributory unemployment insurance, national health insurance and contributory pension schemes, under the control of the Ministry of Pensions and National Insurance. Payment of ‘supplementary benefits’ to those not adequately covered by the National Insurance Scheme was made the responsibility of the National Assistance Board under the National Assistance Act 1948 and thus the old Poor Law concept of relief was finally superseded. Under the same Act, responsibility for the residential care of the aged and infirm was laid upon a new statutory committee of the county council, the Welfare Services Committee and the Public Assistance Committee was dissolved. Our world is constantly changing!

This week, an amusing image for a change…

Invisible tape.



Whilst researching for this blog post, I learned that a man I was once at school with had written about workhouses in the town I grew up in, so I have included his findings, with thanks. From Tudor times until the Poor Law Amendment Act of 1834, care of the poor was in fact the concern of individual parishes. Over in Whittlesey there are records of meetings of the charity governors between 1737 and 1825 and it is known that a workhouse was in existence in the old Tavern Street (later Broad Street) in 1804; this building was virtually a hospital for the aged of the town. Before the inception of the Whittlesey Union, the parishes of Whittlesey levied a rate and doled it out as outdoor relief to people in their own homes, but by 1832 there was quite high unemployment among farm workers, especially in the winter, so the rate levied in Whittlesey was very high. At that time the workhouse housed thirty people, mainly the old and orphans, but sometimes able-bodied men were taken in during the winter. Then the 1834 Poor Law Act was passed in order to build more workhouses and to make it more difficult for the poor to obtain cash handouts, so a new building was started on what was at that time Bassenhally field. The new workhouse had accommodation for sixty inmates and was also a lodging house for vagrants who wandered from one workhouse to another. The inmates received three meat dinners a week and the children received no education. Then in 1851 the workhouse was extended to accommodate one hundred and fifty inmates and in 1874 a further extension was added, at a cost of £8,000. This workhouse, by now also known as ‘the spike’ because of its clock tower, housed over two hundred people. Whilst they were staying there, men were employed on a farm or at sack making; outdoor relief was still available to some people, but able-bodied men had to enter the workhouse with their families in order to obtain relief.
Men, women and children were segregated, although parents had access to their children for one hour per day, whilst single unemployed women were forced into the workhouse to obtain relief. Some people stayed in the workhouse for the rest of their lives, and indeed the copy of the workhouse register in the local museum shows that in the majority of cases the reason people left was death. On Sunday mornings, inmates attended the local St Mary’s church; husbands and wives were allowed to meet on Sundays but were segregated in the church, the women sitting in front of the pulpit and the men along the wall on the other side of the north aisle. In the 1920s the main function of the workhouse seems to have been the care of the sick, the infirm and elderly, women with young children, and orphans. Local people were cared for in the main building, but overnight accommodation was also provided in a separate building for tramps and vagrants, who were expected to work, chopping wood or picking oakum, the chopped wood being sold to the townsfolk. Then in 1930 the board of guardians was disbanded. At the end of the 1930s the building was used by Coates school whilst its own building was undergoing repairs, and shortly afterwards the building was demolished. The need for poor law institutions disappeared with the introduction of the National Assistance Act in 1948, which founded the National Assistance Board, responsible for public assistance. The Board established means-tested supplements for those not adequately covered by national insurance contributions. Then in the early 1950s, the Sir Harry Smith school was built on the site. As a result, my old secondary school is on the site of what was at one time a workhouse where children were not taught!

Whittlesey Workhouse cellar, unearthed beneath the car park of Sir Harry Smith School during renovation work in 2011.

Following the Black Death, a devastating pandemic that killed about one-third of England’s population between 1346 and 1352, the Statute of Cambridge in 1388 was an attempt to address the labour shortage. This new law fixed wages and restricted the movement of labourers, as it was anticipated that if they were allowed to leave their parishes for higher-paid work elsewhere then wages would inevitably rise. According to a historian, the fear of social disorder following the plague ultimately resulted in the state, rather than ‘personal Christian charity’, becoming responsible for the support of the poor. The resulting laws against vagrancy were the origins of state-funded relief for the poor. Then from the 16th century onwards a distinction was legally enshrined between those who were willing to work but could not, and those who were able to work but would not – between the genuinely unemployed and the idler. Supporting the destitute was a problem exacerbated by King Henry VIII’s Dissolution of the Monasteries which began in 1536, as they had been a significant source of charitable relief and provided a good deal of direct and indirect employment. The Poor Relief Act of 1576 went on to establish the principle that if the able-bodied poor needed support, they had to work for it. Then the Act for the Relief of the Poor in 1601 made parishes legally responsible for the care of those within their boundaries who, through either age or infirmity, were unable to work. The Act essentially classified the poor into one of three groups. It proposed that the able-bodied be offered work in a ‘house of correction’, the precursor of the workhouse, where the ‘persistent idler’ was to be punished. It also proposed the construction of housing for the impotent poor, the old and the infirm, although most assistance was granted through a form of poor relief known as ‘outdoor relief’.
This was in the form of money, food, or other necessities given to those living in their own homes, funded by a local tax on the property of the wealthiest in the parish. In Britain, a workhouse was a total institution where those unable to support themselves financially were offered accommodation and employment. In Scotland, they were usually known as poorhouses. The earliest known use of the term ‘workhouse’ is from 1631, in an account by the mayor of Abingdon, reporting that “we have erected with’n our borough a workhouse to set poorer people to work”. However, mass unemployment following the end of the Napoleonic Wars in 1815, the introduction of new technology to replace agricultural workers in particular, and a series of bad harvests meant that by the early 1830s the established system of poor relief was proving to be unsustainable. The New Poor Law of 1834 attempted to reverse the economic trend by discouraging the provision of relief to anyone who refused to enter a workhouse, and some Poor Law authorities hoped to run workhouses at a profit by utilising the free labour of their inmates. Most were employed on tasks such as breaking stones and crushing bones to produce fertiliser. As the 19th century wore on, workhouses increasingly became refuges for the elderly, infirm, and sick rather than the able-bodied poor, and in 1929 legislation was passed to allow local authorities to take over workhouse infirmaries as municipal hospitals. Although workhouses were formally abolished by the same legislation in 1930, many continued under their new appellation of Public Assistance Institutions under the control of local authorities. It was not until the introduction of the National Assistance Act of 1948 that the last vestiges of the Poor Law finally disappeared, and with them the workhouses.

Poor House, Framlingham Castle.

This ‘Red House’ at Framlingham Castle in Suffolk was founded as a workhouse in 1664. The workhouse system evolved in the 17th century, allowing parishes to reduce the cost to ratepayers of providing poor relief. The first authoritative figure for numbers of workhouses comes in the next century from ‘The Abstract of Returns made by the Overseers of the Poor’, which was drawn up following a government survey in 1776. It put the number of parish workhouses in England and Wales at more than 1,800, or about one parish in seven, with a total capacity of more than 90,000 places. This growth in the number of workhouses was prompted by the Workhouse Test Act of 1723, which obliged anyone seeking poor relief to enter a workhouse and undertake a set amount of work, usually for no pay. This system was called indoor relief and the Act helped prevent irresponsible claims on a parish’s poor rate. The growth was also bolstered by the Relief of the Poor Act in 1782 which was intended to allow parishes to share the cost of poor relief by joining together to form unions, known as Gilbert Unions, to build and maintain even larger workhouses to accommodate the elderly and infirm. The able-bodied poor were instead either given outdoor relief or found employment locally. Workhouses were established and mainly conducted with a view to deriving profit from the labour of the inmates, and not as being the safest means of affording relief by at the same time testing the reality of their destitution. The workhouse was in truth at that time a kind of manufactory, carried on at the risk and cost of the poor-rate, employing the worst description of the people, and helping to pauperise the best. By 1832 the amount spent on poor relief nationally had risen to £7 million a year, more than ten shillings per head of population, up from £2 million in 1784 and the large number of those seeking assistance was pushing the system to the verge of collapse. 
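Those pre-decimal sums are easy to misread today, so here is a quick back-of-the-envelope check of the figures above (a purely illustrative Python sketch; the only assumption beyond the text is the standard rate of 20 shillings to the pound):

```python
# Back-of-the-envelope check of the poor-relief figures,
# using pre-decimal currency: 20 shillings to the pound.
SHILLINGS_PER_POUND = 20

relief_1832_pounds = 7_000_000   # national poor-relief spending by 1832
per_head_shillings = 10          # "more than ten shillings per head"

# Implied population if spending were exactly ten shillings per head:
implied_population = relief_1832_pounds * SHILLINGS_PER_POUND / per_head_shillings
print(f"Implied population: about {implied_population / 1e6:.0f} million")

# Growth from £2 million in 1784 to £7 million in 1832:
growth_factor = relief_1832_pounds / 2_000_000
print(f"Spending rose {growth_factor:.1f}-fold between 1784 and 1832")
```

The implied population of roughly fourteen million is consistent with the census figures for England and Wales around 1831, which suggests the "ten shillings per head" claim is of the right order.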
The economic downturn following the end of the Napoleonic Wars in the early 19th century resulted in increasing numbers of unemployed and coupled with developments in agriculture that meant less labour was needed on the land, along with three successive bad harvests beginning in 1828 and the ‘Swing Riots’ of 1830, reform was inevitable. In 1832 the government established a Royal Commission to investigate and recommend how relief could best be given to the poor. The result was the establishment of a centralised Poor Law Commission in England and Wales under the Poor Law Amendment Act in 1834, also known as the New Poor Law, which discouraged the allocation of outdoor relief to the able-bodied, with all cases offered ‘the house and nothing else’. Individual parishes were grouped into Poor Law Unions, each of which was to have a union workhouse. More than 500 of these were built during the following fifty years, two-thirds of them by 1840. In certain parts of the country there was a good deal of resistance to these new buildings, some of it violent, particularly in the industrial north. Many workers lost their jobs during the major economic depression of 1837, and there was a strong feeling that what the unemployed needed was not the workhouse but short-term relief to tide them over. By 1838, five hundred and seventy-three Poor Law Unions had been formed in England and Wales and these incorporated 13,427 parishes, but it was not until 1868 that unions were established across the entire country. Despite the intentions behind the 1834 Act, relief of the poor remained the responsibility of local taxpayers, and there was thus a powerful economic incentive to use loopholes such as sickness in the family to continue with outdoor relief as the weekly cost per person was about half that of providing workhouse accommodation. 
Outdoor relief was further restricted by the terms of the 1844 Outdoor Relief Prohibitory Order, which aimed to end it altogether for the able-bodied poor. As a result, in 1846, of 1.33 million paupers only 199,000 were maintained in workhouses, of whom 82,000 were considered able-bodied, leaving an estimated 375,000 of the able-bodied on outdoor relief. Excluding periods of extreme economic distress, it has been estimated that about 6.5% of the British population may have been accommodated in workhouses at any given time. After 1835, many workhouses were constructed with the central buildings surrounded by work and exercise yards enclosed behind brick walls, so-called “pauper bastilles”. The commission proposed that all new workhouses should allow for the segregation of paupers into at least four distinct groups, each to be housed separately: the aged and infirm, children, able-bodied males, and able-bodied females.

The Carlisle Union Workhouse, opened in 1864. It later became part of the University of Cumbria.

In 1836 the Poor Law Commission distributed six diets for workhouse inmates, one of which was to be chosen by each Poor Law Union depending on its local circumstances. Although dreary, the food was generally nutritionally adequate and according to contemporary records was prepared with great care. Issues such as training staff to serve and weigh portions were well understood. The diets included general guidance, as well as schedules for each class of inmate. They were laid out on a weekly rotation, the various meals selected on a daily basis from a list of foodstuffs. For instance, a breakfast of bread and gruel was followed by dinner, which might consist of cooked meats, pickled pork or bacon with vegetables, potatoes, dumpling, soup and suet or rice pudding. Supper was normally bread, cheese and broth, sometimes butter or potatoes. The larger workhouses had separate dining rooms for males and females; workhouses without separate dining rooms would stagger the meal times to avoid any contact between the sexes. Religion played an important part in workhouse life: prayers were read to the paupers before breakfast and after supper each day. Each Poor Law Union was required to appoint a chaplain to look after the spiritual needs of the workhouse inmates, and he was invariably expected to be from the established Church of England. Religious services were generally held in the dining hall, as few early workhouses had a separate chapel. But in some parts of the country there were more dissenters than members of the established church, and as section 19 of the 1834 Poor Law specifically forbade any regulation forcing an inmate to attend church services ‘in a Mode contrary to their Religious Principles’, the commissioners were reluctantly forced to allow non-Anglicans to leave the workhouse on Sundays to attend services elsewhere, so long as they were able to provide a certificate of attendance signed by the officiating minister on their return. 
As the 19th century wore on, non-conformist ministers increasingly began to conduct services within the workhouse, but Catholic priests were rarely welcomed. A variety of legislation had been introduced during the 17th century to limit the civil rights of Catholics, beginning with the Popish Recusants Act of 1605 in the wake of the failed Gunpowder Plot that year. Though almost all restrictions on Catholics in England and Ireland were removed by the Roman Catholic Relief Act of 1829, a great deal of anti-Catholic feeling remained. Even in areas with large Catholic populations the appointment of a Catholic chaplain was unthinkable, and some guardians went so far as to refuse Catholic priests entry to the workhouse. The education of children presented a dilemma: it was provided free in the workhouse, but had to be paid for by the ‘merely poor’. Instead of being ‘less eligible’, conditions for those living in the workhouse were in certain respects ‘more eligible’ than for those living in poverty outside. By the late 1840s, most workhouses outside London and the larger provincial towns housed only those considered incapable, elderly or sick. By the end of the century only about twenty per cent of those admitted to workhouses were unemployed or destitute, but about thirty per cent of the population over 70 were in workhouses. Responsibility for administration of the poor passed to the Local Government Board in 1871, and the emphasis soon shifted from the workhouse as a receptacle for the helpless poor to its role in caring for the sick and helpless. The Diseases Prevention Act of 1883 allowed workhouse infirmaries to offer treatment to non-paupers as well as inmates. The introduction of pensions in 1908 for those aged over 70 did not reduce the number of elderly housed in workhouses, but it did reduce the number of those on outdoor relief by twenty-five per cent. By the beginning of the 20th century some infirmaries were even able to operate as private hospitals. 
A Royal Commission of 1905 reported that workhouses were unsuited to dealing with the different categories of resident they had traditionally housed, and recommended that specialised institutions for each class of pauper should be established, in which they could be treated appropriately by properly trained staff. The ‘deterrent’ workhouses were in future to be reserved for those considered incorrigible, such as drunkards, idlers and tramps. On 24 January 1918 the Daily Telegraph reported that the Local Government Committee on the Poor Law had presented to the Ministry of Reconstruction a report recommending the abolition of the workhouses and the transfer of their duties to other organisations. That same year, free primary education for all children was provided in the UK. The Local Government Act of 1929 then gave local authorities the power to take over workhouse infirmaries as municipal hospitals, although outside London few did so. The workhouse system was officially abolished in the UK by the same Act on 1 April 1930, but many workhouses, renamed Public Assistance Institutions, continued under the control of local county councils. At the outbreak of the Second World War in 1939 almost 100,000 people were accommodated in the former workhouses, 5,629 of whom were children. The 1948 National Assistance Act abolished the last vestiges of the Poor Law, and with it the workhouses. Many of the buildings were converted into retirement homes run by the local authorities, so that by 1960 slightly more than half of local authority accommodation for the elderly was provided in former workhouses. Camberwell workhouse in Peckham, South London continued until 1985 as a homeless shelter for more than 1,000 men, operated by the Department of Health and Social Security and renamed a resettlement centre. 
Southwell workhouse, also known as Greet House, in Southwell, Nottinghamshire is now a museum, but it was used to provide temporary accommodation for mothers and children until the early 1990s. How often we pass by these buildings without giving a thought to their historical significance.

This week, a thought.
Life is a presentation of choices. Wherever you are now exactly represents the sum of your previous decisions, actions and inactions.

Click: Return to top of page or Index page

The Bright Side

Right now it is approaching the end of January and this week’s blog is slightly longer than usual. Last month we had the shortest day of the year in terms of daylight, which means quite a few people should begin to feel happier now that our sun rises at an earlier time each day! Of course, we become used to that and then the clocks in the UK go forward an hour. Moon phases reveal the passage of time in the night sky: on some nights when we look up at the moon it is full and bright, whilst on others it is just a sliver of silvery light. These changes in appearance are the phases of the moon, and as the moon orbits Earth it cycles through eight distinct phases. The four primary phases of the moon occur about one week apart, with the full moon its most dazzling stage. For example, we had a New Moon on January 2, a First Quarter on January 9, a Full Moon on January 17 and a Last Quarter on January 25. Then we are back to a New Moon on February 1, which will signal the beginning of the Lunar New Year. This is also called Chinese New Year and will mark the ‘Year Of The Water Tiger’. A New Moon is when our satellite is between the Earth and the Sun, so it is not visible to us. Technology has progressed so much now that more folk take excellent photos of the moon and other stellar objects. In addition, we can share these with family, friends, almost anyone we wish, via the Internet technology so many of us can access. But not everyone has either the access to, or even the wish to use, things like Facebook, Messenger, WhatsApp, Zoom and so many others too numerous to mention. In my early days of taking photographs I used a very simple and straightforward camera, a Kodak Instamatic. Once a film was used, I would take it in to a local camera shop where it would be developed, and a few days later I would return to that shop to collect my pictures. 
Sometimes I would be pleased with the results and other times not, but I could at times get some advice from a friendly shop assistant who was a photographer. I really did learn much in those early days and I am grateful even now for the help I received. I have written in previous blogs about the different cameras I have had, from the basic ‘point and click’ up to the modern Single Lens Reflex (SLR) ones, where a prism and mirror system is used to view an image exactly before it is stored electronically on a memory card. Images can now be modified and linked, and such things as their brightness and contrast adjusted, all at the click of a button. Videos are made quickly and easily using simple smartphones, even if they only last a few seconds. I have no doubt that in time, new ideas will provide what many will see as bigger as well as better. But we should surely not lose sight of the past, the basics, the simple ideas. There are many who will come up with new ideas, but they cannot be expected to create them at will. Likewise, those new ideas give rise to further developments. As an example, I have mentioned Guy Fawkes and the Gunpowder Plot in an earlier blog post.

Gunpowder is the first explosive to have been developed. It is popularly listed as one of the ‘Four Great Inventions’ of China, the others being the compass, papermaking and printing. Gunpowder was invented during the late Tang Dynasty in the 9th century, whilst the earliest recorded chemical formula for gunpowder dates to the Song Dynasty of the 11th century. The knowledge of gunpowder spread rapidly throughout Asia, the Middle East and Europe, possibly as a result of the Mongol conquests during the 13th century, with written formulas for it appearing in the Middle East between 1240 and 1280 in a treatise by Hasan al-Rammah, and in Europe by 1267 in the ‘Opus Majus’ by Roger Bacon. It was employed in warfare to some effect from at least the 10th century in weapons such as fire arrows, bombs and the fire lance, before the appearance of the gun in the 13th century. Whilst the fire lance was eventually supplanted by the gun, other gunpowder weapons such as rockets and fire arrows continued to see use in China, Korea, India and eventually Europe. Gunpowder has also been used for non-military purposes such as fireworks for entertainment, as well as in explosives for mining and tunnelling. The evolution of guns then led to the development of large artillery pieces, popularly known as bombards, during the 15th century, pioneered by states such as the Duchy of Burgundy. Firearms came to dominate early modern warfare in Europe by the 17th century, and the gradual improvement of cannons firing heavier rounds for greater impact against fortifications led to the invention of the star fort and the bastion in the Western world, where traditional city walls and castles were no longer suitable for defence. The use of gunpowder technology also spread throughout the Islamic world, as well as to India, Korea and Japan. 
The use of gunpowder in warfare diminished during the course of the 19th century due to the invention of smokeless powder, and as a result gunpowder is often referred to nowadays as ‘black powder’ to distinguish it from the propellant used in contemporary firearms.

A Chinese fire arrow utilising a bag of gunpowder as incendiary,
c. 1390.

The earliest reference to gunpowder seems to have appeared in 142AD during the Eastern Han dynasty, when an alchemist named Wei Boyang, known as the ‘father of alchemy’, wrote about a substance with gunpowder-like properties: a mixture of three powders that would “fly and dance” violently, described in the ‘Book of the Kinship of Three’, a Taoist text on the subject of alchemy. However, Wei Boyang is considered to be a semi-legendary figure meant to represent a ‘collective unity’, and the book was probably written in stages from the Han dynasty to 450AD. Although not specifically named, the powders were almost certainly the ingredients of gunpowder, as no other explosive known to scientists is composed of such powders. Whilst it was almost certainly not their intention to create a weapon of war, Taoist alchemists continued to play a major role in the development of gunpowder due to their experiments with sulphur and saltpetre, although one historian has suggested that, despite the early association of gunpowder with Taoism, this may be a quirk of historiography and a result of the better preservation of texts associated with Taoism, rather than the subject being limited only to Taoists. Their quest for the elixir of life certainly attracted many powerful patrons, one of whom was Emperor Wu of Han. The next reference to gunpowder occurred in the year 300AD during the Jin dynasty, when a Taoist philosopher wrote down all of the ingredients of gunpowder in his surviving works, collectively known as the ‘Baopuzi’. In 492AD, some Taoist alchemists noted that saltpetre, one of the most important ingredients in gunpowder, burns with a purple flame, allowing for practical efforts at purifying the substance, and during the Tang dynasty alchemists used saltpetre in processing the four yellow drugs, namely sulphur, realgar, orpiment and arsenic trisulphide. 
Taoist texts warned against an assortment of dangerous formulas, one of which corresponds with gunpowder. In fact alchemists called this discovery ‘fire medicine’, and the term has continued to refer to gunpowder in China into the present day, a reminder of its heritage as a side result of the search for longevity-increasing drugs. A book published in 1185AD called ‘Gui Dong’ (The Control of Spirits) also contains a story about a Tang dynasty alchemist whose furnace exploded, but it is not known whether this was caused by gunpowder. The earliest surviving chemical formula for gunpowder dates to 1044AD, in the form of a military manual known in English as the ‘Complete Essentials for the Military Classics’, which contains a collection of facts on Chinese weaponry. However this edition has since been lost, and the only currently extant copy is dated to 1510AD during the Ming dynasty. Gunpowder technology also spread to naval warfare, and in 1129AD it was decreed that all warships were to be fitted with trebuchets for hurling gunpowder bombs.

By definition, a gun uses the explosive force of gunpowder to propel a projectile from a tube, so cannons, muskets and pistols are typical examples. In 1259AD a type of fire-emitting lance was made from a large bamboo tube with a pellet wad stuffed inside it. Once ignited, it spewed the pellet wad forward, and it was said that “the sound is like a bomb that can be heard for five hundred or more paces”. The pellet wad mentioned is possibly the first true bullet in recorded history. Fire lances transformed from bamboo, wood or paper-barrelled firearms to metal-barrelled firearms in order to better withstand the explosive pressure of gunpowder, and from there branched off into several different gunpowder weapons known as ‘eruptors’ in the late 12th and early 13th centuries. The oldest extant gun whose dating is unequivocal is the Xanadu Gun, because it bears an inscription describing its date of manufacture, corresponding to 1298AD. It is so called because it was discovered in the ruins of Xanadu, the Mongol summer palace in Inner Mongolia. The design of the gun includes axial holes in its rear, which some speculate could have been used in a mounting mechanism. Another specimen, the Wuwei Bronze Cannon, was discovered in 1980 and may possibly be the oldest as well as the largest cannon of the 13th century, though a similar but much smaller weapon was discovered in 1997. So it seems likely that the gun was born sometime during the 13th century. Gunpowder may have been used during the Mongol invasions of Europe: shortly after the Mongol invasions of Japan (1274AD to 1281AD), the Japanese produced a scroll painting depicting a bomb, speculated to have been the Chinese thunder crash bomb. Japanese descriptions of the invasions also talk of iron and bamboo ‘pao’ causing light and fire and emitting 2,000 to 3,000 iron bullets.

A Swiss soldier firing a hand cannon, late 14th to 15th centuries.
Illustration produced in 1874.

A common theory of how gunpowder came to Europe is that it made its way along the Silk Road through the Middle East. Another is that it was brought to Europe during the Mongol invasion in the first half of the 13th century. Some sources claim that Chinese firearms and gunpowder weapons may have been deployed by the Mongols against European forces at the Battle of Mohi in 1241AD; the transmission may also have been due to subsequent diplomatic and military contacts. Some authors have speculated that William of Rubruck, who served as an ambassador to the Mongols from 1253AD to 1255AD, was a possible intermediary in the transmission of gunpowder. The 1320s seem to have been the take-off point for guns in Europe according to most modern military historians. Scholars suggest that the lack of gunpowder weapons in a well-travelled Venetian’s catalogue for a new crusade in 1321AD implies that guns were unknown in Europe up until this point, but thereafter they spread rapidly across Europe. A French raiding party that sacked and burned Southampton in 1338AD brought with them a ribaudequin, a late medieval volley gun with many small-calibre iron barrels set up in parallel on a platform; it was in use during the 14th and 15th centuries, and when fired in a volley it created a shower of iron shot. But the French brought only three pounds of gunpowder. Around the late 14th century, European and Ottoman guns began to deviate in purpose and design from guns in China, changing from small anti-personnel and incendiary devices to the larger artillery pieces most people imagine today when using the word “cannon”. If the 1320s can be considered the arrival of the gun on the European scene, then the end of the 14th century may very well be the point at which gun development in Europe departed from its trajectory in China. In the last quarter of the 14th century, European guns grew larger and began to blast down fortifications.

In India, gunpowder technology is believed to have arrived by the mid-14th century, but could have been introduced much earlier by the Mongols, who had conquered both China and some borderlands of India, perhaps as early as the mid-13th century. The unification of a large single Mongol Empire resulted in the free transmission of Chinese technology into the Mongol-conquered parts of India. Regardless, it is believed that the Mongols used Chinese gunpowder weapons during their invasions of India. The first gunpowder device, as opposed to naphtha-based pyrotechnics, introduced to India from China in the second half of the 13th century was a rocket called the ‘hawai’. The rocket was used as an instrument of war from the second half of the 14th century onward, and the Delhi sultanate as well as the Bahmani kingdom made good use of them.

‘Mons Meg’, a medieval bombard built in 1449.
Located in Edinburgh Castle.

As a response to gunpowder artillery, European fortifications began displaying architectural principles such as lower and thicker walls in the mid-1400s. Cannon towers were built with artillery rooms from which cannons could fire through slits in the walls. However this proved problematic, as the slow rate of fire, reverberating concussions and noxious fumes produced greatly hindered defenders. Gun towers also limited the size and number of cannon placements, because the rooms could only be built so big. The star fort, also known as the bastion fort, was a style of fortification that became popular in Europe during the 16th century. These were developed in Italy and became widespread across Europe. The main distinguishing features of the star fort were its angle bastions, each placed to support its neighbours with lethal crossfire covering all angles, making such forts extremely difficult to engage and attack. By the 1530s the bastion fort had become the dominant defensive structure in Italy. Outside Europe, the star fort became an ‘engine of European expansion’ and acted as a force multiplier, so that small European garrisons could hold out against numerically superior forces. Wherever star forts were erected, the natives experienced great difficulty in uprooting European invaders. In China, the construction of bastion forts was advocated so that their cannons could better support each other. Gun development and design in Europe reached its most classic form in the 1480s: guns were longer, lighter, more efficient and more accurate than their predecessors of only three decades earlier, and the design persisted. The two primary theories for the appearance of the classic gun involve the development of gunpowder corning and a new method for casting guns. The ‘corning’ hypothesis stipulates that the longer barrels came about as a reaction to the development of corned gunpowder. 
Not only did corned powder keep better, because of its reduced surface area, but gunners also found that it was more powerful and easier to load into guns. Prior to corning, gunpowder would also frequently de-mix into its constituent components and was therefore unreliable. The faster gunpowder reaction was suitable for smaller guns, since large ones had a tendency to crack, and the more controlled reaction allowed large guns to have longer, thinner walls. In India, guns made of bronze have been recovered from Calicut (1504AD) and Diu (1533AD). By the 17th century a diverse variety of firearms was being manufactured in India, large guns in particular. Gujarat supplied Europe with saltpetre for use in gunpowder warfare during the 17th century, and the Dutch, French, Portuguese and English used Chāpra as a centre of saltpetre refining. Aside from warfare, gunpowder was used for hydraulic engineering in China by 1541: gunpowder blasting followed by dredging of the detritus was a technique which Chen Mu employed to improve the Grand Canal at the waterway where it crossed the Yellow River. In Europe, it was utilised in the construction of the ‘Canal du Midi’ in Southern France, which was completed in 1681 and linked the Mediterranean Sea with the Atlantic via 240km of canal and 100 locks. But before gunpowder was applied to civil engineering, there were two ways to break up large rocks: by hard labour, or by heating with large fires followed by rapid quenching. The earliest record of the use of gunpowder in mines comes from Hungary in 1627AD. It was introduced to Britain in 1638AD by German miners, after which time records are numerous, but until the invention of the safety fuse in 1831 the practice was extremely dangerous. Further dangers were the dense fumes given off and the risk of igniting flammable gas when used in coal mines. Gunpowder was also extensively used in railway construction. 
At first railways followed the contours of the land, or crossed low ground by means of bridges and viaducts, but later railways made extensive use of cuttings and tunnels. One 2,400-ft stretch of the 5.4-mile Box Tunnel on the Great Western Railway line between London and Bristol consumed a ton of gunpowder per week for over two years. Then there is the Fréjus Rail Tunnel, also called the Mont Cenis Tunnel, a rail tunnel some 8.5 miles (13.7 kilometres) in length in the European Alps, carrying the Turin–Modane railway through Mont Cenis to an end-on connection with the Culoz–Modane railway and linking Bardonecchia in Italy to Modane in France. The tunnel took 13 years to complete, starting in 1857AD, but even with black powder progress was only 25 centimetres a day until the invention of pneumatic drills, which speeded up the work. However, the latter half of the 19th century saw the invention of nitroglycerin along with nitrocellulose and smokeless powders, which soon replaced traditional gunpowder in most civil and military applications. Believe it or not, there is so much more to tell on this subject and, as you can see, we have learned and developed so much over the centuries. I am sure we will continue to do so. As always, I hope that it will be for the good, for the benefit of all. 
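To get a rough sense of the quantities behind that Box Tunnel figure, here is a quick back-of-the-envelope calculation. This is only a sketch: it assumes the ton-per-week rate quoted in the text was constant, and treats "over two years" as a two-year lower bound.

```python
# Back-of-the-envelope: gunpowder used on one stretch of the Box Tunnel.
# Figures from the text: one ton per week, for over two years.
tons_per_week = 1
weeks = 52 * 2          # "over two years" taken as a two-year lower bound

total_tons = tons_per_week * weeks
print(total_tons)       # at least 104 tons for that single 2,400-ft stretch
```

So that one 2,400-ft stretch alone would have consumed well over a hundred tons of gunpowder.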

This week…
A great deal has been written about marriage. I once saw the following quote: “Marriage is an institution – but not everyone wants to live in an institution”. Another is “Marriage can be like a deck of cards. At the beginning, all you need are two hearts and a diamond, but in the end you wish you had a club and a spade”…


Accepting Change

As we go through life, change is all around us, day by day. I have written about our universe, our sun and the planets, including this Earth, which has changed over millions of years. Though it is only for a relatively short space of time that we humans have been recording these developments, with the technology we have developed and the skills we have learned we can look back and see what has occurred. But these changes are continuing, and we are having a marked effect on them. This Earth still spins, seasons change, life ends and new life begins. Many species develop as they adapt to the changes around them, but many others are no longer with us. I was watching an item on YouTube about changes being made to the Catthorpe Interchange on the M1 motorway and how newts had been discovered there. As a result, changes were made to the area in order to preserve their habitat whilst they grew and were eventually moved. A newt is a form of salamander. Salamanders are a group of amphibians typically characterised by their lizard-like appearance, with slender bodies, blunt snouts, short limbs projecting at right angles to the body, and the presence of a tail in both larvae and adults. Their diversity is highest in the Northern Hemisphere. They rarely have more than four toes on their front legs and five on their rear legs, but some species have fewer digits and others lack hind limbs. Their permeable skin usually makes them reliant on habitats in or near water or other cool, damp places. Some species are aquatic throughout their lives, some take to the water intermittently, and others are entirely terrestrial as adults. They are capable of regenerating lost limbs as well as other damaged parts of their bodies, and researchers hope to reverse-engineer this remarkable regenerative process for potential human medical applications, such as brain and spinal cord injury treatment or preventing harmful scarring during heart surgery recovery. 
The skin of some species contains the powerful poison tetrodotoxin, and as a result these salamanders tend to be slow-moving and have a bright warning colouration in order to advertise their toxicity. Salamanders typically lay eggs in water and have aquatic larvae, but great variation occurs in their lifecycles. Newts metamorphose through three distinct developmental life stages: aquatic larva, terrestrial juvenile (eft), and adult. Adult newts have lizard-like bodies and return to the water every year to breed, otherwise living in humid, cover-rich land habitats. They are therefore semiaquatic, alternating between aquatic and terrestrial habitats. Not all aquatic salamanders are considered newts, however. More than 100 known species of newts are found in North America, Europe, North Africa and Asia. Newts are threatened by habitat loss, fragmentation and pollution. Several species are endangered, and at least one species, the Yunnan lake newt, has recently become extinct; it was only found in the shallow lake waters and adjacent freshwater habitats near the Kunming Lake in Yunnan, China. The Old English name of the animal was ‘efte’ or ‘efeta’ (of unknown origin), resulting in the Middle English ‘eft’. This word was transformed irregularly into ‘euft’, ‘evete’ or ‘ewt(e)’. The initial “n” was added from the indefinite article “an” by the early 15th century. The form “newt” appears to have arisen as a dialectal variant of ‘eft’ in Staffordshire, but entered Standard English by the Early Modern period, when it was used by Shakespeare in ‘Macbeth’. The regular form ‘eft’, now only used for newly metamorphosed specimens, survived alongside ‘newt’, especially in composition, the larva being called “water-eft” and the mature form “land-eft” well into the 18th century, but the simplex ‘eft’ as equivalent to “water-eft” has been in use since at least the 17th century. 
Dialectal English and Scots also have the word ‘ask’ (also ‘awsk’ and ‘esk’ in Scots), used for both newts and wall lizards, from Old English and ultimately Proto-Germanic, literally ‘lizard-badger’ or ‘distaff-like lizard’. Latin had the name ‘stellio’ for a type of spotted newt, and Ancient Greek had the name κορδύλος, presumably for the water newt (immature newt, or eft). German has ‘Molch’, from Middle High German. Newts are also known as ‘Tritones’, named for the mythological Triton, in historical literature, and ‘triton’ remains in use as a common name in some Romance languages, in Greek, in Romanian, Russian and Bulgarian.

The Pyrenean brook newt lives in small streams in the mountains.

Newts are found in North America, Europe, North Africa and Asia. The Pacific newts and the Eastern newts are amongst the seven representative species in North America, whilst most diversity is found in the Old World. In Europe and the Middle East, eight genera with roughly 30 species are found, with the ribbed newts extending to northernmost Africa. Eastern Asia, from eastern India over Indochina to Japan, is home to five genera with more than 40 species. As I have mentioned, newts are semiaquatic, spending part of the year in the water for reproduction and the rest of the year on land. Whilst most species prefer areas of stagnant water such as ponds, ditches or flooded meadows for reproduction, some species such as the Danube crested newt can also occur in slow-flowing rivers. In fact the European brook newts and European mountain newts have even adapted to life in cold, oxygen-rich mountain streams. During their terrestrial phase, newts live in humid habitats with abundant cover such as logs, rocks or earth holes. Newts share many of the characteristics of their salamander kin, including semipermeable glandular skin, four equal-sized limbs and a distinct tail. However, the skin of the newt is not as smooth as that of other salamanders. The cells at the site of an injury have the ability to de-differentiate, reproduce rapidly and differentiate again to create a new limb or organ. One hypothesis is that the de-differentiated cells are related to tumour cells, since chemicals that produce tumours in other animals will produce additional limbs in newts. In terms of development, the main breeding season for newts in the Northern Hemisphere is in June and July. After courtship rituals of varying complexity, which take place in ponds or slow-moving streams, the male newt transfers a spermatophore, which is taken up by the female. Fertilised eggs are laid singly and are usually attached to aquatic plants. 
This distinguishes them from the free-floating eggs of frogs or toads, which are laid in clumps or in strings. A plant leaf is usually folded over each egg and attached to it for protection. The larvae, which resemble fish fry but are distinguished by their feathery external gills, hatch out in about three weeks. After hatching, they eat algae, small invertebrates, or other amphibian larvae. During the subsequent few months, the larvae undergo metamorphosis, during which they develop legs, whilst the gills are absorbed and replaced by air-breathing lungs. At this time some species, such as the North American newts, also become more brightly coloured. Once fully metamorphosed, they leave the water and live a terrestrial life, when they are known as ‘efts’. Only when the eft reaches adulthood will the North American species return to live in water, rarely venturing back onto the land. Conversely, most European species live their adult lives on land and only visit water to breed.

The Pacific newt is known for its toxicity.

Many newts produce toxins in their skin secretions as a defence mechanism against predators. ‘Taricha’ newts of western North America are particularly toxic, and the rough-skinned newt of the Pacific Northwest actually produces more than enough tetrodotoxin to kill an adult human. In fact some Native Americans of the Pacific Northwest used the toxin to poison their enemies! However, the toxins are only dangerous if ingested or if they otherwise enter the body, for example through a wound. Newts can safely live in the same ponds or streams as frogs and other amphibians, and most newts can be safely handled, provided the toxins they produce are not ingested or allowed to come in contact with mucous membranes or breaks in the skin. I have also learned that newts, as with salamanders in general and other amphibians, serve as bioindicators: because of their thin, sensitive skin, their presence (or absence) can serve as an indicator of the health of the environment. Most species are highly sensitive to subtle changes in the pH level of the streams and lakes where they live. Because their skin is permeable to water, they absorb oxygen and other substances they need through their skin. This is why scientists carefully study the stability of the amphibian population when studying the water quality of a particular body of water.

But of course that is just one example of changes on this Earth and this to me is why it is so very important to be aware of change. I know that change occurs all the time, change is healthy in so many ways. But so often people make changes in a very selfish way, with no thought as to what impact it may have, whether it be on the people around us, on the plants and animals, even on Earth itself. It has been said that many years ago an animal was left on an island, albeit by accident perhaps, but that single animal then preyed on a species local to the island and wiped the species out completely. But the animal could not have been brought to that island without human intervention. We have very strict controls on our borders, as most if not all countries do, and yet there are those who flout the rules, not thinking that the rules should apply to them. A while ago I learned of Bird’s Nest soup, called the ‘Caviar of the East’, but rather than being made from twigs and bits of moss, it is made from the hardened saliva of swiftlets, dissolved in a broth. It is a Chinese delicacy, is high in minerals like calcium, magnesium and potassium and is extremely rare and valuable. However, because it is an animal product, it is subject to strict import restrictions, particularly with regard to H5N1 avian flu, which could cause an epidemic if brought into another country. But some people attempt to bring this item over from such places as China, hiding it in their luggage, even though they are warned not to. It is potentially dangerous to bring such items into another country because of the harm they can do. Nowadays we travel around the world far more easily, we can get on an aircraft and be on the other side of the world in a matter of hours, a trip that would at one time have taken us weeks.
I was fortunate enough to have a superb holiday a few years ago which took me around the world to Australia and New Zealand, then up to the United States of America, with a number of superb stopping-off places in between. A few years before I had flown to the U.S.A. and wherever I went, the same strict border controls were in place. In fact, prior to my long cruise I had to have a few vaccinations, with proof that I had done so, and whilst boarding at Southampton a few passengers were not permitted to travel because they had not been vaccinated. As a result, they had to make their own way to our next port of call, which was Tenerife, after they had been vaccinated. Right now we are still in the midst of a pandemic, though there are those who have differing views on it, both in terms of its effect and its treatment. Only time will tell. As expected, it is having a marked effect on us, on our daily lives and on the health and welfare of everyone. It is changing how we live, how we interact with family and friends and how we cope. Some I know are coping better than others. There is no doubt that we all have a collective responsibility to manage in these troubled times, to believe the people who are skilled in medicine and not be swayed by the people who only think selfishly of themselves. As I wrote in a blog post last year, some folk want the newest, the latest things and treasure possessions, whilst others consider money itself to be important. It does not matter what country they are from. There are those who say that money is the root of all evil, but they are in fact misquoting from the Bible, as the correct version is “For the love of money is the root of all evil: which while some coveted after, they have erred from the faith, and pierced themselves through with many sorrows”. ~ 1 Timothy 6:10. So it is not money itself that is the problem, it is what is done with it that matters.
We read so often how people seek both peace and contentment, and it is often those who lead a simpler life without many possessions who find them, as they have enough food and clothing for themselves and they do what they can to help others. They give thanks every day for all things in their lives, the good and the not so good. Such folk are content. But if those who have money would share it with those who have less, even if it were simply to increase a worker’s basic wage, it would make such a tremendous difference. That is a change which many would gladly accept.

This week I was reminded…
Of a large shop in Peterborough which, many years ago, clearly had a central cash office. As I recall, to pay for goods your money was handed over to an assistant who put it, along with an invoice, into a plastic container. This was placed into a pneumatic pipe system which went between departments, the container whizzed along, your cash was taken and a receipt returned in the same way. I believe that in some areas, even telephone exchanges used them. It is nothing like the electronic systems we use today!


Human Evolution

Last week I wrote about how the Sun, along with the planets, was thought to have been formed. This time I will say a bit more about the colonisation of the land, a bit about extinctions, and then talk about our human evolution and its history.

An artist’s conception of Devonian flora.

I said last week that the Huronian ice age might have been caused by the increased oxygen concentration in the atmosphere, which caused the decrease of methane (CH4) in the atmosphere. Methane is a strong greenhouse gas, but with oxygen it reacts to form CO2, a less effective greenhouse gas. Oxygen accumulation from photosynthesis resulted in the formation of an ozone layer that absorbed much of the Sun’s ultraviolet radiation, meaning unicellular organisms that reached land were less likely to die, and as a result prokaryotes began to multiply and became better adapted to survival out of the water. These microscopic single-celled organisms have no distinct nucleus with a membrane and include bacteria. These organisms colonised the land, then along came eukaryotes, organisms consisting of a cell or cells in which the genetic material is DNA in the form of chromosomes contained within a distinct nucleus. For a long time, the land remained barren of multicellular organisms. The supercontinent Pannotia formed around 600Ma (that is 600 million years ago) and then broke apart a short 50 million years later. Fish, the earliest vertebrates, evolved in the oceans around 530Ma. A major extinction event occurred near the end of the Cambrian period, which ended 488Ma. Several hundred million years ago plants, probably resembling algae, and fungi started growing at the edges of the water, and then out of it. The oldest fossils of land fungi and plants date to around 480 to 460Ma, though molecular evidence suggests the fungi may have colonised the land as early as 1,000Ma and the plants 700Ma. Initially remaining close to the water’s edge, mutations and variations resulted in further colonisation of this new environment. The timing of the first animals to leave the oceans is not precisely known, but the oldest clear evidence is of arthropods on land around 450Ma, perhaps thriving and becoming better adapted due to the vast food source provided by the terrestrial plants.
There is also unconfirmed evidence that arthropods may have appeared on land as early as 530Ma. The first of five great mass extinctions was the Ordovician-Silurian extinction and its possible cause was the intense glaciation of Gondwana, which eventually led to a snowball Earth in which some 60% of marine invertebrates became extinct. The second mass extinction was the Late Devonian extinction, probably caused by the evolution of trees, which could have led to the depletion of greenhouse gases like CO2, or to eutrophication, the process by which an entire body of water, or parts of it, becomes progressively enriched with minerals and nutrients. Eutrophication has also been defined as a “nutrient-induced increase in phytoplankton productivity”. In this extinction 70% of all species died out. The third mass extinction was the Permian-Triassic, or the Great Dying event, possibly caused by some combination of the Siberian Traps volcanic event, an asteroid impact, methane hydrate gasification, sea level fluctuations and a major anoxic event. In fact, either the Wilkes Land Crater in Antarctica or the Bedout structure off the northwest coast of Australia may indicate an impact connection with the Permian-Triassic extinction. But it remains uncertain whether these or other proposed Permian-Triassic boundary craters are real impact craters, or even contemporaneous with the Permian-Triassic extinction event. This was by far the deadliest extinction ever, with about 57% of all families and 83% of all genera killed. The fourth mass extinction was the Triassic-Jurassic extinction event, in which almost all small creatures became extinct, probably due to new competition from the dinosaurs, who were the dominant terrestrial vertebrates throughout most of the Mesozoic era. After yet another extinction, the most severe of the period, around 230Ma, dinosaurs split off from their reptilian ancestors.
The Triassic-Jurassic extinction event at 200Ma spared many of the dinosaurs and they soon became dominant among the vertebrates. Though some mammalian lines began to separate during this period, existing mammals were probably small animals resembling shrews. The boundary between avian and non-avian dinosaurs is not clear, but Archaeopteryx, traditionally considered one of the first birds, lived around 150Ma. The earliest evidence for evolving flowers is during the Cretaceous period, some 20 million years later around 132Ma. Then the fifth and most recent mass extinction was the K-T extinction. Around 66Ma, a 10-kilometre (6.2 mile) asteroid struck Earth just off the Yucatan Peninsula, at what was then the southwestern tip of Laurasia and where the Chicxulub crater in Mexico is today. This ejected vast quantities of particulate matter and vapour into the air that occluded sunlight, inhibiting photosynthesis. 75% of all life, including the non-avian dinosaurs, became extinct, marking the end of the Cretaceous period and the Mesozoic era.

The Chicxulub crater in Yucatan, Mexico.

A small African ape living around 6Ma (6 million years ago) was the last animal whose descendants would include both modern humans and their closest relatives, the chimpanzees, and only two branches of its family tree have surviving descendants. Very soon after the split, for reasons that are still unclear, apes in one branch developed the ability to walk upright. Brain size increased rapidly, and by 2Ma the first animals classified in the genus Homo had appeared. Of course, the line between different species or even genera is somewhat arbitrary as organisms continuously change over generations. Around the same time, the other branch split into the ancestors of the common chimpanzee and the ancestors of the bonobo as evolution continued simultaneously in all life forms. The ability to control fire probably began in Homo erectus, probably at least 790,000 years ago but perhaps as early as 1.5Ma, though it is possible that the discovery and use of controlled fire may even predate Homo erectus, with fire perhaps being used as early as the Lower Palaeolithic. It is more difficult to establish the origin of language and it is unclear whether Homo erectus could speak, or whether that capability did not begin until Homo sapiens. As brain size increased, babies were born earlier, before their heads grew too large to pass through the pelvis. As a result, they exhibited more plasticity and thus possessed an increased capacity to learn, and required a longer period of dependence. Social skills became more complex, language became more sophisticated and tools became more elaborate. This contributed to further cooperation and intellectual development. Modern humans are believed to have originated around 200,000 years ago or earlier in Africa, as the oldest fossils date back to around 160,000 years ago. The first humans to show signs of spirituality were the Neanderthals, usually classified as a separate species with no surviving descendants.
They buried their dead, often with no sign of food or tools. But evidence of more sophisticated beliefs, such as the early Cro-Magnon cave paintings, probably with magical or religious significance, did not appear until 32,000 years ago. Cro-Magnons also left behind stone figurines such as the Venus of Willendorf, probably also signifying religious belief. By 11,000 years ago, Homo sapiens had reached the southern tip of South America, the last of the uninhabited continents, except for Antarctica, which remained undiscovered until 1820 AD. Tool use and communication continued to improve, and interpersonal relationships became more intricate. Throughout more than 90% of its history, Homo sapiens lived in small bands as nomadic hunter-gatherers. It has been thought that as language became more complex, the ability to remember as well as to communicate information improved, so that ideas could be exchanged quickly and passed down the generations. Cultural evolution quickly outpaced biological evolution and history proper began. It seems that between 8,500BC and 7,000BC, humans in the Fertile Crescent area of the Middle East began the systematic husbandry of plants and animals, and so true agriculture began. This spread to neighbouring regions, and developed independently elsewhere, until most Homo sapiens lived sedentary lives in permanent settlements as farmers. In those civilisations which did adopt agriculture, the relative stability and increased productivity provided by farming allowed the population to expand. Not all societies abandoned nomadism, especially those in the isolated areas of the globe that were poor in domesticable plant species, such as Australia. Agriculture had a major impact; humans began to affect the environment as never before. Surplus food allowed a priestly or governing class to arise, followed by an increasing division of labour, which led to Earth’s first civilisation at Sumer in the Middle East, between 4,000BC and 3,000BC.
Additional civilisations quickly arose in ancient Egypt, at the Indus River valley and in China. The invention of writing enabled complex societies to arise: record-keeping and libraries served as a storehouse of knowledge and increased the cultural transmission of information. Humans no longer had to spend all their time working for survival, enabling the first specialised occupations, like craftsmen, merchants and priests. Curiosity and education drove the pursuit of knowledge and wisdom, and various disciplines, including science, albeit in a primitive form, arose. This in turn led to the emergence of increasingly larger and more complex civilisations, such as the first empires, which at times traded with one another, or fought for territory and resources. By around 500BC there were more advanced civilisations in the Middle East, Iran, India, China, and Greece, at times expanding, at other times entering into decline. In 221BC, China became a single polity, this being an identifiable political entity: a group of people who have a collective identity and who are organised by some form of institutionalised social relations, having the capacity to mobilise resources. It would grow to spread its culture throughout East Asia, and it has remained the most populous nation in the world. During this period, the famous Hindu texts known as the Vedas came into existence in the Indus Valley region, whose people developed in warfare, arts, science, mathematics as well as in architecture. The fundamentals of Western civilisation were largely shaped in Ancient Greece, with the world’s first democratic government and major advances in philosophy as well as science. Ancient Rome made great advances in law, government, and engineering; the Roman Empire was Christianised by Emperor Constantine in the early 4th century, but had declined by the end of the 5th century. The Christianisation of Europe began in the 7th century.
In 610AD Islam was founded and quickly became the dominant religion in Western Asia. The ‘House of Wisdom’ was established in the Abbasid era in Baghdad, Iraq. It is considered to have been a major intellectual centre during the Islamic Golden Age, where Muslim scholars in Baghdad as well as Cairo flourished from the ninth to the thirteenth centuries, until the Mongol sack of Baghdad in 1258AD. Meanwhile, in 1054AD the Great Schism between the Roman Catholic Church and the Eastern Orthodox Church led to the prominent cultural differences between Western and Eastern Europe. In the 14th century, the Renaissance began in Italy with advances in religion, art, and science. At that time the Christian Church as a political entity lost much of its power. In 1492AD, Christopher Columbus reached the Americas, thus initiating great changes to the New World. European civilisation began to change from 1500AD, leading to both the Scientific and Industrial revolutions. The European continent began to exert political and cultural dominance over human societies around the world, a time known as the Colonial era. Then in the 18th century a cultural movement known as the Age of Enlightenment further shaped the mentality of Europe and contributed to its secularisation. From 1914 to 1918 and 1939 to 1945, nations around the world were embroiled in World Wars. Following World War I, the League of Nations was a first step in establishing international institutions to settle disputes peacefully. After failing to prevent World War II, mankind’s bloodiest conflict, it was replaced by the United Nations, and after that war many new states were formed, declaring or being granted independence in a period of decolonisation. The democratic capitalist United States and the socialist Soviet Union became the world’s dominant superpowers for a time, and they held an ideological, often violent rivalry known as the Cold War until the dissolution of the latter.
In 1992, several European nations joined together in the European Union, and as transportation and communication have improved, both the economies and political affairs of nations around the world have become increasingly intertwined. However, this globalisation has often produced both conflict and cooperation. As we continue in this beautiful world though, we are at present having to cope with a world-wide pandemic for which no cure has yet been found. We are researching and looking for vaccines that it is said will at least reduce the adverse effects of Covid-19, however many do not believe that these same vaccines are what we need. As a result, a great many deaths are still being reported in countries right across our world. Some say it is a man-made virus, others suggest conspiracy theories, but I feel sure that just as other viruses have been beaten in the past, this one will be too. However, in the meantime we should surely behave responsibly and work together to help reduce the spread of this virus, no matter what our thoughts, ideas or beliefs may be. So that in years to come, others may then look back and learn, in order for all life on Earth to continue.

This week, a simple quote…

“The purpose of life is a life of purpose.”
~ Robert Byrne (22 May 1930 – 06 December 2016)


Looking Back

I try not to spend too much time looking back on my life, but there are times when it is good to do so and I have been reminded of a blog post I sent out early last year. A few years ago now a good friend sent me an article about a daughter learning about Darwin’s Theory of Evolution and then her mother telling her about the Sanskrit Avatars, which tell their version of the beginning of life here on Earth. I appreciated that, but to me there are other people, for example the Aborigines and the American Indians, who all have their own traditions. Whatever way is right, however things occurred, I really do believe that this world, along with the rest of the Universe, didn’t just happen by accident. With looming discrepancies about the true age of the universe, scientists have taken a fresh look at the observable, expanding universe and have now estimated that it is almost 14 billion years old, plus or minus 40 million years. Considering that, as well as our Sun, there are around 100,000 million stars in the Milky Way alone, our Earth is a bit small! As well as that, there are an estimated 500 billion galaxies. With all the fighting and killing that we humans have done in the extremely short time (relatively speaking) that we have been around, it is perhaps a good thing that the nearest star system to our sun is Alpha Centauri, which is 4.3 light-years away. That’s about 25 trillion miles (40 trillion kilometres) away from Earth – nearly 300,000 times the distance from the Earth to the Sun. In time, about 5 billion years from now, our sun will run out of hydrogen. Our star is currently in the most stable phase of its life cycle and has been since the birth of our solar system, about 4.5 billion years ago, and once all the hydrogen gets used up, the sun will grow out of this stable phase. But what about our Earth?
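For anyone curious, those distances are easy to check with a quick back-of-the-envelope calculation. Here is a little Python sketch I ran, using rounded standard values for the length of a light-year and the Earth-Sun distance, so the figures are approximate:

```python
# A quick back-of-the-envelope check of the distances quoted above.
# The constants are rounded standard values, so the results are approximate.
LIGHT_YEAR_KM = 9.461e12   # kilometres in one light-year
KM_PER_MILE = 1.609344     # kilometres in one mile
AU_MILES = 9.296e7         # average Earth-Sun distance in miles

distance_ly = 4.3          # distance to Alpha Centauri in light-years
distance_km = distance_ly * LIGHT_YEAR_KM
distance_miles = distance_km / KM_PER_MILE

print(f"About {distance_km:.2e} km")        # roughly 40 trillion kilometres
print(f"About {distance_miles:.2e} miles")  # roughly 25 trillion miles
print(f"Nearly {distance_miles / AU_MILES:,.0f} times the Earth-Sun distance")
```

The numbers come out at around 4.1 × 10¹³ kilometres and 2.5 × 10¹³ miles, which agrees with the "40 trillion kilometres" and "25 trillion miles" figures, and at roughly 270,000 times the Earth-Sun distance, hence "nearly 300,000 times".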
In my blog post last week I wrote about what might happen if we were to take an imaginary quick jaunt through our solar system in the potential ‘last days’ of the sun. There has been speculation, there have been films and TV series, all giving a view on how things were or might be. The film ‘2001: A Space Odyssey’ is just one example. Other films and series, like Star Trek, have imagined beings from other worlds colonising Earth, and some have considered what life would be like if another race were to change life here completely. A favourite of mine, Stargate, started out as a film and then became a series, in which the Egyptian ruler, Ra, travelled via a star-gate to a far-distant world where earth-like creatures lived. We can speculate and wonder, it is a thing that we humans can do. Though if you know of the late, great Douglas Adams and his tales, we should not panic. Just remember that in his writings, at one point the Earth was destroyed to make way for a hyperspace bypass and that just before that happened, the dolphins left Earth and as they did so, they sent a message saying “So long, and thanks for all the fish”. But I digress. I cannot possibly detail the full history of our Earth here, but I can perhaps highlight a few areas and I shall do my best.

Many attempts have been made over the years to comprehend the main events of Earth’s past, characterised by constant change and biological evolution. There is now a geological time scale, as defined by international convention, which depicts the large spans of time from the beginning of the Earth to the present, and its divisions chronicle some definitive events of Earth history. Earth formed around 4.54 billion years ago, approximately one-third the age of the Universe, by accretion from the solar nebula – accretion being growth by the gradual accumulation of additional layers or matter. Volcanic outgassing probably created the primordial atmosphere and then the ocean, but the early atmosphere contained almost no oxygen. Much of the Earth was molten because of frequent collisions with other bodies, which led to extreme volcanism. Whilst the Earth was in its earliest stage, a giant impact collision with a planet-sized body named Theia is thought to have formed the Moon. Over time, the Earth cooled, causing the formation of a solid crust and allowing liquid water on the surface. The Hadean eon represents the time before a reliable fossil record of life; it began with the formation of the planet and ended 4 billion years ago. The following Archean and Proterozoic eons produced the beginnings of life on Earth and its earliest evolution. The succeeding eon is divided into three eras: the first brought forth arthropods, fishes, and the first life on land; the next spanned the rise, reign, and climactic extinction of the non-avian dinosaurs; and the following one saw the rise of mammals. Recognisable humans emerged at most 2 million years ago, a vanishingly small period on the geological scale. The earliest undisputed evidence of life on Earth dates from at least 3.5 billion years ago, after a geological crust had started to solidify.
There are microbial mat fossils found in 3.48 billion-year-old sandstone discovered in Western Australia, and other early physical evidence of a biogenic substance is graphite found in 3.7 billion-year-old rocks discovered in southwestern Greenland. Photosynthetic organisms appeared between 3.2 and 2.4 billion years ago and began enriching the atmosphere with oxygen. Life remained mostly small and microscopic until about 580 million years ago, when complex multicellular life arose. This developed over time and culminated in the Cambrian Explosion about 541 million years ago. This sudden diversification of life forms produced most of the major phyla known today, and divided the Proterozoic eon from the Cambrian Period of the Paleozoic Era. It is estimated that 99 percent of all species that ever lived on Earth, over five billion, have become extinct. Estimates of the number of Earth’s current species range from 10 million to 14 million, of which about 1.2 million are documented, but over 86 percent have not been described. However, it was recently claimed that 1 trillion species currently live on Earth, with only one-thousandth of one percent described. The Earth’s crust has constantly changed since its formation, as has life since its first appearance. Species continue to evolve, taking on new forms, splitting into daughter species, or going extinct in the face of ever-changing physical environments. The process of plate tectonics continues to shape the Earth’s continents and oceans and the life which they harbour.

So the history of Earth is divided into four great eons, starting with the formation of the planet. Each eon saw the most significant changes in Earth’s composition, climate and life. Each eon is subsequently divided into eras, which in turn are divided into periods, which are further divided into epochs. In the Hadean eon, the Earth was formed out of debris around the solar protoplanetary disk. There was no life; temperatures were extremely hot, with frequent volcanic activity and hellish-looking environments, hence the eon’s name, which comes from Hades. Possible early oceans or bodies of liquid water appeared, and the Moon was formed around this time, probably due to a collision between the Earth and a protoplanet. In the next eon came the first form of life, with some continents existing and an atmosphere composed of volcanic and greenhouse gases. Following this came early life of a more complex form, including some forms of multicellular organisms. Bacteria began producing oxygen, shaping the third and current of Earth’s atmospheres. Plants, later animals and possibly earlier forms of fungi formed around that time. The early and late phases of this eon may have undergone a few ‘Snowball Earth’ periods, in which all of the planet suffered below-zero temperatures. A few early continents may have existed in this eon. Finally, complex life, including vertebrates, began to dominate the Earth’s oceans in a process known as the Cambrian Explosion. Supercontinents formed but later dissolved into the current continents. Gradually life expanded to land and more familiar forms of plants, animals and fungi began to appear, including insects and reptiles. Several mass extinctions occurred, though amongst the survivors birds, the descendants of non-avian dinosaurs, and more recently mammals emerged. Modern animals, including humans, evolved in the most recent phases of this eon.

An artist’s rendering of a protoplanetary disk.

The standard model for the formation of our Solar System, including the Earth, is the Solar Nebula hypothesis. In this model, the Solar System formed from a large, rotating cloud of interstellar dust and gas, composed of hydrogen and helium created shortly after the Big Bang some 13.8 billion years ago, and heavier elements ejected by supernovae. About 4.5 billion years ago, the nebula began a contraction that may have been triggered by a shock wave from a nearby supernova, which would have also made the nebula rotate. As the cloud began to accelerate, its angular momentum, gravity and inertia flattened it into a protoplanetary disk that was perpendicular to its axis of rotation. Small perturbations due to the collisions and the angular momentum of other large debris created the means by which kilometre-sized protoplanets began to form, orbiting the nebular centre. The centre of the nebula, not having much angular momentum, collapsed rapidly, the compression heating it until the nuclear fusion of hydrogen into helium began. After more contraction, a ‘T Tauri’ star ignited and evolved into the Sun. Meanwhile, in the outer part of the nebula gravity caused matter to condense around density perturbations and dust particles, and the rest of the protoplanetary disk began separating into rings. In a process known as runaway accretion, successively larger fragments of dust and debris clumped together to form planets. Earth formed in this manner about 4.54 billion years ago and was largely completed within 10 to 20 million years. The solar wind of the newly formed T Tauri star cleared out most of the material in the disk that had not already condensed into larger bodies. The same process is expected to produce accretion disks around virtually all newly forming stars in the universe, some of which yield planets. The proto-Earth then grew until its interior was hot enough to melt the heavy metals, and having higher densities than silicates, these metals sank.
This so-called ‘iron catastrophe’ resulted in the separation of a primitive mantle and a metallic core only 10 million years after the Earth began to form, producing the layered structure of Earth and setting up the formation of its magnetic field.

The Earth is often described as having had three atmospheres. The first, captured from the solar nebula, was composed of its lighter elements, mostly hydrogen and helium. A combination of the solar wind and the Earth’s heat would have driven off this atmosphere, leaving it depleted of these elements compared with cosmic abundances. After the impact which created the Moon, the molten Earth released volatile gases, and more gases were later released by volcanoes, completing a second atmosphere rich in greenhouse gases but poor in oxygen. Finally, the third atmosphere, rich in oxygen, emerged when bacteria began to produce oxygen. The new atmosphere probably contained water vapour, carbon dioxide, nitrogen and smaller amounts of other gases. Water must have been supplied by meteorites from the outer asteroid belt; some large planetary embryos and comets may also have contributed. Though most comets today lie in orbits farther from the Sun than Neptune, computer simulations show that they were originally far more common in the inner parts of the Solar System. As the Earth cooled, clouds formed and rain created the oceans; recent evidence suggests the oceans may have begun forming quite early, and by the start of the Archean eon they already covered much of the Earth. This early formation has been difficult to explain because of a problem known as the ‘faint young Sun’ paradox. Stars are known to get brighter as they age, and at the time of its formation the Sun would have been emitting only about 70% of its current power; in other words, the Sun has become roughly 30% brighter over the last 4.5 billion years. Many models indicate that the Earth would have been covered in ice, and a likely solution is that there was enough carbon dioxide and methane to produce a greenhouse effect.
The carbon dioxide would have been produced by volcanoes and the methane by early microbes, whilst another greenhouse gas, ammonia, would have been ejected by volcanoes but quickly destroyed by ultraviolet radiation. One of the reasons for interest in the early atmosphere and ocean is that they form the conditions under which life first arose. There are many models, but little consensus, on how life emerged from non-living chemicals; chemical systems created in the laboratory fall well short of the minimum complexity of a living organism. The first step in the emergence of life may have been chemical reactions that produced many of the simpler organic compounds, including nucleobases and amino acids, that are the building blocks of life. An experiment in 1953 by Stanley Miller and Harold Urey showed that such molecules could form in an atmosphere of water, methane, ammonia and hydrogen with the aid of sparks to mimic the effect of lightning. Although the atmospheric composition was probably different from that used by Miller and Urey, later experiments with more realistic compositions also managed to synthesise organic molecules. Additional complexity could have been reached from at least three possible starting points: self-replication, an organism’s ability to produce offspring similar to itself; metabolism, its ability to feed and repair itself; and external cell membranes, which allow food to enter and waste products to leave but exclude unwanted substances. The earliest cells absorbed energy and food from the surrounding environment. They used fermentation, the breakdown of more complex compounds into less complex compounds with less energy, and used the energy so liberated to grow and reproduce. Fermentation can only occur in an oxygen-free environment. The evolution of photosynthesis made it possible for cells to derive energy from the Sun. Most of the life that covers the surface of the Earth depends directly or indirectly on photosynthesis.
The most common form, oxygenic photosynthesis, turns carbon dioxide, water and sunlight into food. It captures the energy of sunlight in energy-rich molecules, which then provide the energy to make sugars. To supply the electrons in the circuit, hydrogen is stripped from water, leaving oxygen as a waste product. Some organisms, including purple bacteria and green sulphur bacteria, use an anoxygenic form of photosynthesis in which the electron donors are alternatives to the hydrogen stripped from water, such as hydrogen sulphide, sulphur and iron. Such organisms are restricted to otherwise inhospitable environments like hot springs and hydrothermal vents, and these simpler anoxygenic forms arose not long after the appearance of life. At first, the oxygen released by photosynthesis was bound up with limestone, iron and other minerals; the oxidised iron appears as red layers in geological strata which are called banded iron formations. Once most of the exposed, readily reacting minerals had been oxidised, oxygen finally began to accumulate in the atmosphere. Though each cell only produced a minute amount of oxygen, the combined metabolism of many cells over a vast time transformed Earth’s atmosphere to its current state. This was Earth’s third atmosphere. Some oxygen was stimulated by solar ultraviolet radiation to form ozone, which collected in a layer near the upper part of the atmosphere. The ozone layer absorbed, and still absorbs, a significant amount of the ultraviolet radiation that had once passed through the atmosphere, allowing cells to colonise the surface of the ocean and, eventually, the land. Without the ozone layer, ultraviolet radiation bombarding land and sea would have caused unsustainable levels of mutation in exposed cells. Photosynthesis had another major impact: to most existing life, oxygen was toxic, and much life on Earth probably died out as its levels rose, in what is known as the oxygen catastrophe.
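The overall reaction of oxygenic photosynthesis described above can be summarised as 6 CO2 + 6 H2O + light → C6H12O6 + 6 O2. As a small illustration of my own (not from the original text), a few lines of Python confirm that the atom counts on each side balance:

```python
# Check that 6 CO2 + 6 H2O -> C6H12O6 + 6 O2 is balanced, atom by atom.
from collections import Counter

def atoms(terms):
    """Sum atom counts over (coefficient, {element: count}) terms."""
    total = Counter()
    for coeff, composition in terms:
        for element, n in composition.items():
            total[element] += coeff * n
    return total

reactants = [(6, {"C": 1, "O": 2}), (6, {"H": 2, "O": 1})]   # 6 CO2 + 6 H2O
products  = [(1, {"C": 6, "H": 12, "O": 6}), (6, {"O": 2})]  # C6H12O6 + 6 O2

assert atoms(reactants) == atoms(products)
print("Balanced:", dict(atoms(reactants)))
```

Both sides come out to 6 carbon, 12 hydrogen and 18 oxygen atoms, with the light supplying the energy rather than any matter.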
Resistant forms survived and thrived, and some developed the ability to use oxygen to increase their metabolism and obtain more energy from the same food. The Sun’s natural evolution made it progressively more luminous during the Archean and Proterozoic eons; the Sun’s luminosity increases by about 6% every billion years. As a result, the Earth began to receive more heat from the Sun in the Proterozoic eon. However, the Earth did not get warmer. Instead, geological records suggest that it cooled dramatically during the early Proterozoic. Palaeomagnetic evidence suggests that glacial deposits found in South Africa were located near the equator when they formed, so this glaciation, known as the Huronian glaciation, may have been global. Some scientists suggest it was so severe that the Earth was frozen over from the poles to the equator, a hypothesis called Snowball Earth. The Huronian ice age might have been caused by the increased oxygen concentration in the atmosphere, which reduced the amount of methane (CH4) present. Methane is a strong greenhouse gas, but it reacts with oxygen to form CO2, a less effective greenhouse gas. When free oxygen became available in the atmosphere, the concentration of methane could have decreased dramatically, enough to counter the effect of the increasing heat flow from the Sun. However, the term Snowball Earth is more commonly used to describe later, extreme ice ages during the Cryogenian period. There were four such periods, each lasting about 10 million years, between 750 and 580 million years ago, when the Earth is thought to have been covered with ice apart from the highest mountains, and average temperatures were about −50°C (−58°F). The snowball may have been partly due to the location of the supercontinent of the time straddling the Equator. Carbon dioxide dissolves in rain to form carbonic acid, which weathers rocks; the products are then washed out to sea, extracting the greenhouse gas from the atmosphere.
When the continents are near the poles, the advance of ice covers the rocks, slowing the reduction in carbon dioxide, but in the Cryogenian the weathering of that supercontinent was able to continue unchecked until the ice advanced to the tropics. The process may finally have been reversed by the emission of carbon dioxide from volcanoes or the destabilisation of methane gas hydrates.
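As a rough sanity check of my own (not part of the original text), the two figures quoted above, around 6% brightening per billion years and around 30% over 4.5 billion years, are consistent with each other under simple compound growth:

```python
# If the Sun brightens ~6% per billion years, how much brighter is it
# after 4.5 billion years of compound growth?

def luminosity_growth(rate_per_gyr: float, gyr: float) -> float:
    """Relative luminosity after `gyr` billion years of compound growth."""
    return (1.0 + rate_per_gyr) ** gyr

growth = luminosity_growth(0.06, 4.5)
print(f"Brightening over 4.5 Gyr: {(growth - 1) * 100:.0f}%")  # ~30%
print(f"Early Sun's output vs today: {100 / growth:.0f}%")     # ~77%
```

Compounding 6% per billion years gives about 30% brightening, matching the first figure; it puts the early Sun at roughly 77% of its present output, in the same ballpark as the quoted 70%, which comes from more detailed solar models.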

Astronaut Bruce McCandless II outside of the Space Shuttle Challenger in 1984.

I think that sets the basic scene for the Earth itself, but there is still much to write about in terms of the colonisation of land, extinctions and human evolution. But I think this is more than enough for now. Change has continued at a rapid pace, and along with technological developments such as nuclear weapons, computers, genetic engineering and nanotechnology there has been economic globalisation, spurred by advances in communication and transportation technology, which has influenced everyday life in many parts of the world. Cultural and institutional forms such as democracy, capitalism and environmentalism have increased in influence. Major concerns and problems such as disease, war, poverty and violent radicalism, along with more recent human-caused climate change, have risen as the world population increases. In 1957, the Soviet Union launched the first artificial satellite into orbit and, soon afterwards, Yuri Gagarin became the first human in space. The American Neil Armstrong was the first to set foot on another astronomical object, the Moon. Unmanned probes have been sent to all the known planets in the Solar System, with some, such as the two Voyager spacecraft, having left the Solar System. Five space agencies, representing over fifteen countries, have worked together to build the International Space Station. Aboard it, there has been a continuous human presence in space since 2000. The World Wide Web became a part of everyday life in the 1990s and has since become an indispensable source of information in the developed world. I have no doubt that there will be much more to find, learn, discover and develop.

This week, as we begin a new year…
When attempting to recall the order of the planets in our Solar System, moving outwards from the Sun, I have found they can be remembered by:
‘My Very Educated Mother Just Served Us Nachos’
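Just for fun, here is a tiny Python check of my own (not from the original post) that the mnemonic’s initials really do line up with the eight planets in order from the Sun:

```python
# Verify that each word of the mnemonic starts with the same letter
# as the corresponding planet, in order from the Sun outwards.
planets = ["Mercury", "Venus", "Earth", "Mars",
           "Jupiter", "Saturn", "Uranus", "Neptune"]
mnemonic = "My Very Educated Mother Just Served Us Nachos"

for word, planet in zip(mnemonic.split(), planets):
    assert word[0] == planet[0], f"{word} does not match {planet}"

print("Mnemonic matches all eight planets")
```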


(Pluto was discovered in 1930 and described as a planet located beyond Neptune but, following improvements in technology, it was reclassified as a ‘dwarf planet’ in 2006.)
