Easter is a Christian festival and cultural holiday commemorating the resurrection of Jesus from the dead, described in the New Testament of the Bible as having occurred on the third day after his burial, following his crucifixion by the Romans at Calvary c. 30 AD. It is the culmination of the Passion of Jesus and is preceded by Lent, a forty-day period of fasting, prayer and penance. Christians refer to the week before Easter as ‘Holy Week’, which in Western Christianity contains the days of the Easter Triduum, the three-day period that begins with the liturgy on the evening of Maundy Thursday, reaches its high point in the Easter Vigil and closes with evening prayer on Easter Sunday. It is a moveable observance recalling the Passion, crucifixion, death, burial and resurrection of Jesus as portrayed in the canonical gospels. In Eastern Christianity the same days and events are commemorated with names all beginning with “Holy” or “Holy and Great”, and Easter itself may be called “Great and Holy Pascha”, “Easter Sunday”, “Pascha” or “Sunday of Pascha”. In Western Christianity Eastertide, or the Easter Season, begins on Easter Sunday and lasts seven weeks, ending with the coming of the 50th day, Pentecost Sunday. In Eastern Christianity the Paschal season likewise ends with Pentecost, but the leave-taking of the Great Feast of Pascha falls on the 39th day, the day before the Feast of the Ascension. Easter and its related holidays are moveable feasts, not falling on a fixed date but computed from a lunisolar reckoning (the solar year plus the Moon’s phase), similar to the Hebrew calendar. The First Council of Nicaea, a council of Christian bishops convened in the Bithynian city of Nicaea (now İznik, Turkey) by the Roman Emperor Constantine in AD 325, was the first effort to attain consensus in the church through an assembly representing all of Christendom. 
Its main accomplishments were the settlement of the Christological issue of the divine nature of God the Son and his relationship to God the Father, the construction of the first part of the Nicene Creed, the mandating of uniform observance of the date of Easter, and the promulgation of early canon law. No details for the computation were specified; these were worked out in practice, a process that took centuries and generated a number of controversies. It has come to be the first Sunday after the ecclesiastical full moon that occurs on or soonest after 21 March. Even when calculated on the basis of the more accurate Gregorian calendar, the date of that full moon sometimes differs from that of the astronomical first full moon after the March equinox. Easter is linked to the Jewish Passover by its name (pesach and pascha being the origin of the term), by its origin (according to the synoptic gospels, both the crucifixion and the resurrection took place during the Passover), by much of its symbolism and by its position in the calendar. In most European languages the feast is called by the words for Passover in those languages, and in older English versions of the Bible ‘Easter’ was the term used to translate Passover. Easter customs vary across the Christian world and include sunrise services, midnight vigils, exclamations and exchanges of Paschal greetings, and one I had never heard of before, called ‘clipping the church’. I have learned that this is an ancient custom traditionally held only in England, on Easter Monday, Shrove Tuesday or a date relevant to the saint associated with the church. The word ‘clipping’ is Anglo-Saxon in origin, derived from the word ‘clyppan’, meaning ‘embrace’ or ‘clasp’. So ‘clipping the church’ involves either the church congregation or local children holding hands in an inward-facing ring around the church, which may then be reversed to an outward-facing ring if a prayer is said for the wider world beyond the parish. 
Once the circle is completed, onlookers will often cheer and sometimes hymns are sung. Often there is dancing, and after the ceremony a sermon is delivered in the church, sometimes followed by refreshments. Christians adopted this tradition to show their love for their church and the surrounding people, but currently there are only a few churches left in England that hold this ceremony, and each of these appears to honour it on a different day. Other customs include the decoration and the communal breaking of Easter eggs, a symbol of the empty tomb. The Easter lily, a symbol of the resurrection in Western Christianity, traditionally decorates the chancel area of churches on Easter Day and for the rest of Eastertide. Additional customs that have become associated with Easter and are observed by both Christians and some non-Christians include Easter parades, communal dancing (in Eastern Europe), the Easter Bunny and egg hunting. There are also traditional Easter foods that vary by region and culture.

The modern English term ‘Easter’, with modern Dutch ‘ooster’ and German ‘Ostern’, developed from an Old English word that usually appears in the form ‘Ēastrun’, but also as ‘Ēostre’. Bede provides the only documentary source for the etymology of the word, in his eighth-century ‘The Reckoning of Time’. He wrote that ‘Ēosturmōnaþ’ (Old English ‘Month of Ēostre’, translated in Bede’s time as ‘Paschal month’) was an English month, corresponding to April, which he says “was once called after a goddess of theirs named Ēostre, in whose honour feasts were celebrated in that month”. In Latin and Greek the Christian celebration was, and still is, called ‘Pascha’, a word derived from Aramaic and cognate with the Hebrew ‘pesach’. The word originally denoted the Jewish festival known in English as Passover, commemorating the Jewish exodus from slavery in Egypt. The supernatural resurrection of Jesus from the dead, which Easter celebrates, is one of the chief tenets of the Christian faith. The resurrection established Jesus as the Son of God and is cited as proof that God will righteously judge the world; for those who trust in Jesus’s death and resurrection, “death is swallowed up in victory.” Any person who chooses to follow Jesus receives “a new birth into a living hope through the resurrection of Jesus Christ from the dead”. Through faith in the working of God, those who follow Jesus are spiritually resurrected with him so that they may walk in a new way of life and receive eternal salvation, being resurrected to dwell in the Kingdom of Heaven. Easter is linked to the Passover and the exodus from Egypt, as recorded in the Old Testament of the Bible, through the Last Supper and the sufferings and subsequent crucifixion that preceded the resurrection. According to the three Synoptic gospels, Jesus gave the Passover meal a new meaning, as in the upper room during the Last Supper he prepared himself and his disciples for his death. 
He identified the bread and cup of wine as his body, soon to be sacrificed, and his blood, soon to be shed. Paul the apostle states, “Get rid of the old yeast that you may be a new batch without yeast, as you really are. For Christ, our Passover lamb, has been sacrificed”. This refers to the Passover requirement to have no yeast in the house and to the allegory of Jesus as the Paschal lamb.

The first Christians were certainly aware of the Hebrew calendar, and Jewish Christians, the first to celebrate the resurrection of Jesus, timed the observance in relation to Passover. Direct evidence for a more fully formed Christian festival of Pascha (Easter) begins to appear in the mid-2nd century. Perhaps the earliest surviving primary source referring to Easter is a mid-2nd-century Paschal homily attributed to Melito of Sardis (the bishop of Sardis, near Smyrna in western Anatolia and a great authority in early Christianity), which characterises the celebration as a well-established one. Evidence for another kind of annually recurring Christian festival, those commemorating the martyrs, began to appear at about the same time. While martyrs’ days (usually the individual dates of martyrdom) were celebrated on fixed dates in the local solar calendar, the date of Easter was fixed by means of the local Jewish lunisolar calendar. This is consistent with the celebration of Easter having entered Christianity during its earliest, Jewish period, but does not leave the question free of doubt.

A stained-glass window depicting the Passover Lamb.

Easter and the holidays related to it are moveable feasts, in that they do not fall on a fixed date in either the Gregorian or Julian calendars (both of which follow the cycle of the sun and the seasons). Instead, the date for Easter is determined by what is known as a lunisolar calendar, similar to the Hebrew calendar. In AD 325 the First Council of Nicaea established two rules, independence from the Jewish calendar and worldwide uniformity, which were the only rules for Easter explicitly laid down by the council. No details for the computation were specified; these were worked out in practice, a process that took centuries and generated a number of controversies. In particular, the Council did not decree that Easter must fall on a Sunday, but this was already the practice almost everywhere. In Western Christianity, using the Gregorian calendar, Easter always falls on a Sunday between 22 March and 25 April, within about seven days after the astronomical full moon. The following day, Easter Monday, is a legal holiday in many countries with predominantly Christian traditions. Eastern Orthodox Christians base Paschal date calculations on the Julian calendar. Because of the thirteen-day difference between the calendars between 1900 and 2099, the Julian 21 March corresponds, during the 21st century, to 3 April in the Gregorian calendar. Since the Julian calendar is no longer used as the civil calendar of the countries where Eastern Christian traditions predominate, Easter there varies between 4 April and 8 May in the Gregorian calendar. Also, because the Julian ‘full moon’ is always several days after the astronomical full moon, the eastern Easter is often later, relative to the visible Moon’s phases, than western Easter. Amongst the Oriental Orthodox, some churches have changed from the Julian to the Gregorian calendar, and for them the date of Easter, as of other fixed and moveable feasts, is the same as in the Western church. 
The Gregorian calculation of Easter was based on a method devised by Aloysius Lilius, a doctor from the Calabria region of Italy, using the phases of the Moon, and has been adopted by almost all Western Christians and by Western countries which celebrate national holidays at Easter. For the British Empire and colonies, a determination of the date of Easter Sunday using Golden Numbers and Sunday Letters was defined by the Calendar (New Style) Act 1750 with its annexe. This was designed to match the Gregorian calculation exactly.
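As an aside for the technically curious: the Gregorian date of Easter described above can be computed with the widely published ‘Anonymous Gregorian’ algorithm (often credited to Meeus, Jones and Butcher), which condenses the Golden Number and Sunday Letter tables into pure arithmetic. This is only an illustrative sketch, not the method as laid out in the 1750 Act itself, and the function name is my own.

```python
def gregorian_easter(year):
    """Anonymous Gregorian algorithm: returns (month, day) of
    Easter Sunday in the Gregorian calendar for a given year."""
    a = year % 19                         # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)              # century and year-within-century
    d, e = divmod(b, 4)                   # leap-century corrections
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30    # age of the ecclesiastical moon
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7  # days until the following Sunday
    m = (a + 11 * h + 22 * l) // 451      # correction for late ecclesiastical full moons
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1

# Easter fell on 31 March in 2024 and falls on 20 April in 2025.
print(gregorian_easter(2024))  # (3, 31)
print(gregorian_easter(2025))  # (4, 20)
```

Note how the result always lands between (3, 22) and (4, 25), matching the 22 March to 25 April range mentioned above.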

Receiving the Holy Light at Easter.
St. George Greek Orthodox Church, Adelaide, Australia.

The above image shows the congregation lighting their candles from the new flame, just as the priest has retrieved it from the altar. Note that the picture is illuminated by flash; all electric lighting is off and only the oil lamps in front of the Iconostasis remain lit. In the 20th century, some individuals and institutions proposed changing the method of calculating the date of Easter, the most prominent proposal being the Sunday after the second Saturday in April. Despite having some support, proposals to reform the date have not been implemented. A congress of Eastern Orthodox bishops, which included representatives mostly from the Patriarch of Constantinople and the Serbian Patriarch, met in Constantinople in 1923, where the bishops agreed to the Revised Julian calendar. The original form of this calendar would have determined Easter using precise astronomical calculations based on the meridian of Jerusalem; however, all the Eastern Orthodox countries that subsequently adopted the Revised Julian calendar adopted only that part of it that applied to festivals falling on fixed dates in the Julian calendar. The revised Easter computation that had been part of the original 1923 agreement was never permanently implemented in any Orthodox diocese. Here in the United Kingdom, the Easter Act 1928 set out legislation to change the date of Easter to the first Sunday after the second Saturday in April (in other words, the Sunday in the period from 9 to 15 April). However, the legislation has not been implemented, although it remains on the statute book and could be implemented subject to approval by the various Christian churches. 
At a summit in Aleppo, Syria in 1997, the World Council of Churches (WCC) proposed a reform of the calculation of Easter which would have replaced the present divergent practices with modern scientific knowledge, taking into account actual astronomical instances of the spring equinox and full moon based on the meridian of Jerusalem, while also following the tradition of Easter being on the Sunday following the full moon. The recommended changes would have sidestepped the calendar issues and eliminated the difference in date between the Eastern and Western churches. The reform was proposed for implementation starting in 2001, but despite repeated calls for reform it was not ultimately adopted by any member body. In January 2016, Christian churches again considered agreeing on a common, universal date for Easter, whilst also simplifying the calculation of that date, with either the second or third Sunday in April being popular choices. So far, no date has been agreed.

Easter is seen by many as a celebration of new life and rebirth and, as one might expect, the egg is one such symbol. In Christianity it became associated with Jesus’s crucifixion and resurrection, and the custom of the Easter egg originated in the early Christian community of Mesopotamia, who stained eggs red in memory of the blood of Christ, shed at his crucifixion. As such, for Christians, the Easter egg is a symbol of the empty tomb. The oldest tradition is to use dyed chicken eggs. In the Eastern Orthodox Church, Easter eggs are blessed by a priest, either in families’ baskets together with other foods forbidden during Great Lent, or on their own for distribution in church or elsewhere.

Traditional red Easter eggs for blessing by a priest.

Easter eggs are a widely popular symbol of new life among the Eastern Orthodox and in the folk traditions of many Slavic countries. I have learned of a decorating tradition known as ‘pisanka’, a common name for an egg (usually that of a chicken, although goose or duck eggs are also used) richly ornamented using various techniques. The word ‘pisanka’ is derived from the verb ‘pisać’, which in contemporary Polish means exclusively ‘to write’ yet in old Polish also meant ‘to paint’. Originating as a pagan tradition, the pisanka was absorbed by Christianity to become the traditional Easter egg, and pisanki are now considered to symbolise the revival of nature and the hope that Christians gain from faith in the resurrection of Jesus Christ. The celebrated House of Fabergé workshops created exquisitely jewelled Easter eggs for the Russian Imperial family from 1885 to 1916. A modern custom in the Western world is to substitute decorated chocolate eggs, often filled with sweets. As many people give up sweets as their Lenten sacrifice, they enjoy them at Easter after having abstained during the preceding forty days of Lent.

Easter eggs, a symbol of the empty tomb.

The British chocolate company Cadbury, which manufactured its first Easter egg in 1875, sponsors the annual Easter egg hunt which takes place at over two hundred and fifty National Trust locations here in the United Kingdom. On Easter Monday, the President of the United States holds an annual Easter egg roll on the White House lawn for young children. In some traditions children put out empty baskets for the Easter bunny to fill whilst they sleep, waking to find their baskets filled with chocolate eggs and other treats. Many children around the world follow the tradition of colouring hard-boiled eggs and giving baskets of sweets. One fascinating fact to me, though, is that since the rabbit is considered a pest in Australia, the Easter Bilby is used as an alternative. Bilbies are native Australian marsupials that are an endangered species, so to raise money and increase awareness of conservation efforts, bilby-shaped chocolates and related merchandise are sold in many stores throughout Australia as an alternative to Easter bunnies. But this time should surely be remembered as a new beginning, as it has been for centuries throughout the world. Happy Easter!

This week…
Not everyone has a home computer these days, but more and more folk find them useful as part of doing research on a range of subjects. Happily most public libraries allow folk free access to the ones they have, but time is strictly limited and use must be allocated. Sadly I can never get in to my local library, as every time I phone up they tell me they are fully ‘booked’…

Click: Return to top of page or Index page

Time Team

Many years ago I was looking through TV channels and chanced upon a show called ‘Time Team’. The name was intriguing, so I sat down and watched. It fascinated me. I continued watching and I am glad I did. But sadly, after quite a few years, the TV series ended, so I was delighted to see a mention of it again recently. As is my way, I did some research online and found that a fair bit had been written, especially recently, and the following is what I found. I discovered some excellent images and some information on digs that were done last year, as well as work expected quite soon this year, I hope. In fact ‘Time Team’ is a well-rehearsed story, but what I didn’t know was that it started as ‘Timesigns’, a four-part series which first aired in 1991. Roadford Lake, also known as the Roadford Reservoir, is a man-made reservoir fed by the River Wolf, located to the north-east of Broadwoodwidger in West Devon, eight miles (13 km) east of Launceston. I do like the delightful village names we have in this country! This place is quite small: according to the 2001 census it had a population of just 548. The reservoir is the largest area of fresh water in the southwest of England. Exploring the archaeology of this area came about after Tim Taylor approached Mick Aston to present the series; as a result, along with Phil Harding, three members of the future Time Team core were now in place. Yet despite bringing the past to life using the ingredients of excavation, landscape survey and reconstructions, including Phil felling a tree with a flint axe, Timesigns was a very different beast. In fact the four-part series is still available to watch online at https://www.channel4.com/programmes/timesigns and watching it now provides a lesson in just how revolutionary the Time Team format actually was. 
That is because Timesigns was slower paced: it had Mick talking directly to the camera in a style more akin to a history documentary or Open University broadcast, and there was a focus on interesting, previously discovered artefacts. It included Phil Harding in woodland, seeking out raw materials for a reconstructed axe, allowing the audience to witness the hands-on practical process. It meant that viewers were placed at the heart of the action, and this would later become a hallmark of Time Team. Whilst filming Timesigns, Tim and Mick often discussed other ways to bring archaeology to a television audience, and what later proved to be something of a providential conversation took place in a Little Chef on the Okehampton bypass. Mick mentioned that he had recently missed a train and, having a couple of hours to kill, decided to explore. During that time he deduced the town’s medieval layout and, struck by how much could be learned in a few hours, Tim wondered what could be achieved in a few days. When he took this idea to various studios, though, no-one wanted to know. Still, it was not the first time that a chance conversation with Mick had started someone thinking about television archaeology. A few years earlier Tony Robinson had joined a trip which Mick was leading to Santorini, a Greek island in the southern Aegean Sea about 200 km (120 miles) southeast of the mainland, as part of some education work for Bristol University. Mick’s aptitude for breathing life into the past convinced Tony that archaeology had untapped television potential, but when he returned to Britain Tony found the studios unwilling to take the idea further. The breakthrough came when Timesigns proved an unexpected hit. Suddenly Channel 4 was receptive to the idea of a major archaeology programme; Tim Taylor devised the name ‘Time Team’ and in 1992 a pilot episode was filmed in Dorchester-on-Thames. 
Never screened and reputedly lost in the Channel 4 vaults, this pilot captured a show that was radically different to Timesigns. It was initially conceived as a quiz show in a similar vein to ‘Challenge Anneka’, where the team would be called on to solve archaeological mysteries whilst racing against the clock. Envelopes hidden at strategic points would set challenges along the lines of ‘find the medieval high street in two hours’. Judged a misfire by Channel 4, it could have been the end. Thankfully, the Time Team format was instead radically overhauled, although shades of the quiz-show concept did survive in early episodes. The onscreen introduction of all the team members and their specialist skills was a hangover from a time when participants would have varied from week to week, rather than coalescing into a core group. In the meantime, Tony’s role transformed from quiz master to translator of all things archaeological for a general audience, and the final piece of the jigsaw fell into place during the fledgling Time Team‘s first episode, filmed at Athelney, site of Alfred the Great’s apocryphal burnt cakes. The site was scheduled, precluding excavation, so John Gater, the programme’s ‘geophysics’ wizard, surveyed the field. Despite the Ancient Monuments Laboratory having drawn a blank the year before, John’s state-of-the-art kit revealed the monastic complex in startling clarity. Best of all, the cameras were rolling to capture the archaeologists’ euphoria as the geophysical plot emerged from a bulky printer in the back of the survey vehicle.

Mick Aston at work.

As well as an arresting demonstration of the power of teamwork, Athelney showed how geophysics could be the heart of the programme. As Mick Aston observed, “the geophys and Time Team have always gone hand in hand. It is the programme really. Geophysics gives you that instant picture you can then evaluate”. John has kept on top of technical advances, and the results of his survey of Brancaster Roman fort provide one of the really outstanding moments in later series, with the breathtaking 3D model it produced of the buried structures persuading English Heritage to commission a complete survey on the spot. The original team brought an impressive breadth of skills to the programme. Victor Ambrus’s peerless ability to bring the past to life on the fly was well displayed after his artwork caught Tim Taylor’s eye in an edition of Reader’s Digest, and the late Robin Bush brought a degree of historical expertise that would be missed almost as much as the man himself following his departure in 2003. Despite their varied talents and backgrounds, it quickly became apparent that the team had a natural chemistry. Time Team became well known for their individual ways and styles, including Mick’s famous striped jumper. Requested by a commissioning editor to wear more colourful clothing, Mick turned up in the most garish garment he could find as a joke, only to be told it was perfect. Far from a media concoction, the unique individuals on Time Team were filmed going about their work with an honesty and integrity that has seen the series heralded as Britain’s first reality television show. There can be little doubt that part of the show’s early success stems from the audience warming to the group’s genuine passion for teasing out the past. Rather than targeting the palaces and castles of the rich and famous, each of the episodes sought to solve simple, local questions. 
This was really highlighted by having a member of the public read out a letter of invitation at the beginning, posing the question they wanted answered. The message was simple: this is local archaeology, it is ‘your’ archaeology. It worked well, especially as the director of the first few seasons followed the digs as they evolved; his technique meant that viewers were often placed on the edge of a trench when discoveries happened, making them privy to key discussions. However, some archaeologists were initially, quite fairly, a bit sceptical. One aspect that some treated with suspicion was the three-day deadline. Research digs usually ran for weeks if not months, and it was questioned whether anything approaching responsible archaeology could be achieved in such a short space of time. It was certainly not ideally suited to showcasing all of the techniques available to modern archaeologists. Much money would be spent on scientific dating, with the results only coming back in time for a line of dialogue to be dubbed on months after filming had concluded. Coincidentally, digging within a tight timeframe mirrored changes occurring within the profession. Obliged to cut evaluation trenches to meet the deadlines of multi-million pound construction projects, archaeologists saw a surge in short-term excavation projects in the 1990s. It led to an appreciation of just how much information could be quickly gleaned from comparatively modest trenching. The thrill of time running out also engaged viewers, and Time Team’s popularity was rewarded with progressively longer series. Season one, aired in 1994, had four episodes, while season two followed with five, and season three then boasted six.

Some members of the Time Team.

Seasons nine to twelve have often been seen as Time Team‘s ‘golden’ age. Screening thirteen episodes a year, as well as live digs and specials, the programme seemed to be ever-present. Its stars were household names and, at its zenith, Time Team had regular audiences of over three million viewers. Now that the format was safely established, the programme was increasingly able to capitalise on its fame and access big-name sites, even Buckingham Palace. Whilst the allure of such sites created a powerful television spectacle, it also marked a move away from the programme’s humble local-archaeology origins. Even after its star began to wane, Time Team remained popular, and an audience study in 2006 indicated that twenty million people watched at least one show that year. However, it was season nineteen that changed everything, as in 2011 the production centre for the programme moved from London to Cardiff. Very much a political gesture aimed at building up regional television, the series was picked because it seemed a safe pair of hands. Sadly it cost the show almost all of its behind-the-scenes staff; expertise honed over fifteen years was lost at a stroke, to be replaced by crew and production staff who knew neither each other nor archaeology. Despite some great new people who learnt fast, expecting them to produce the same calibre of product immediately was just too big a demand. Time Team‘s cost also made it vulnerable. Towards the end of its run an average episode would cost around £200,000, a budget more on the scale of a small drama show in the eyes of television insiders, but over twenty years Channel 4 had in fact pumped £4 million directly into British archaeology. It is to the Channel’s credit that it did this despite much of that outlay being channelled into post-excavation work that never appeared on-screen. The money was well spent, and today only five Time Team sites remain unpublished, a record that shames many UK units and academics.

The Time Team in 2012.

Back then, Time Team’s legacy left much to celebrate. It brought the money and expertise to investigate sites that would otherwise never have been touched. The Isle of Mull episode in season seventeen is a great example of what could be discovered. With only some strange earthworks exciting the curiosity of local amateur archaeologists to go on, the programme was flexible enough to take a gamble, and the result was a previously unknown 5th-century monastic enclosure linked to St Columba. It enabled a local group to secure Heritage Lottery Fund money to dig the site. Time Team excavations at Binchester’s Roman fort also helped kickstart a major research project. I was saddened when the series ended, but in 2021 there was excellent news when, thanks to overwhelming support from its fans, the Time Team returned for two brand-new digs in September that year, with the episodes due to be released this year on the YouTube channel ‘Time Team Official’. This will give viewers the chance to engage as the shows are researched and developed, see live blogs during filming, watch virtual-reality landscape data at home and join in Q&As with the team. Carenza Lewis, Stewart Ainsworth, Helen Geake and geophys genius John Gater will all be returning. They are joined by new faces representing the breadth of experts practising archaeology today. Sir Tony Robinson, who is an honorary patron, says: “I was delighted to hear about the plans for the next chapter in Time Team’s story. It’s an opportunity to find new voices and should help launch a new generation of archaeologists. While I won’t be involved in the new sites, I was delighted to accept the role of honorary patron of the Time Team project. It makes me chief super-fan and supporter. 
All armoury in our shared desire to inspire and stimulate interest in archaeology at all levels.” Like Tony, I too am a great fan of Time Team and feel sure that this bodes well, as there is now a Time Team website at http://www.timeteamdigital.com.

This week…

A Turkish proverb.


All Fools’ Day

More commonly known as April Fools’ Day, this is celebrated on 1 April each year and has been observed for several centuries by many different cultures, though its exact origins remain a mystery. Traditions include playing hoaxes or practical jokes on others, often ending the event by calling out “April Fool!” to the recipient so they realise they’ve been caught out by the prank. Whilst its exact history is shrouded in mystery, the embrace of April Fools’ Day jokes by the media and major brands has ensured the unofficial holiday’s long life. Mass media can be involved in these pranks, which may then be revealed as such on the following day. The day itself is not a public holiday in any country except in Odessa, Ukraine, where the first of April is an official city holiday. The custom of setting aside a day for playing harmless pranks upon one’s neighbour has become a relatively common one in the world, and a disputed association between 1 April and foolishness appears in Geoffrey Chaucer’s ‘The Canterbury Tales’ (1392), in the ‘Nun’s Priest’s Tale’, where a vain person is tricked by a fox with the words ‘Since March began thirty days and two’, i.e. 32 days since March began, which is 1 April. In 1508, the French poet Eloy d’Amerval referred to a ‘poisson d’avril’, possibly the first reference to the celebration in France. Prompted by the Protestant Reformation, the Council of Trent, an ecumenical council of the Catholic Church, issued condemnations of what it defined to be heresies committed by proponents of Protestantism and also issued key statements and clarifications of the Church’s doctrine and teachings, including scriptures, the Biblical canon, sacred tradition, original sin, the sacraments, Mass and the veneration of saints. 
The Council met for twenty-five sessions between 13 December 1545 and 4 December 1563. Pope Paul III, who convoked, or called together, the Council, oversaw the first eight sessions during 1545 to 1547, whilst the twelfth to sixteenth sessions, held between 1551 and 1552, were overseen by Pope Julius III and the final seventeenth to twenty-fifth sessions by Pope Pius IV between 1562 and 1563. Partly as a result, the use of January 1st as New Year’s Day was not adopted officially in France until the Edict of Roussillon in 1564; the switch from the Julian to the Gregorian calendar followed later, in 1582. Under the Julian calendar, as in the Hindu calendar, the new year began with the spring equinox around April 1st. So people who were slow to get the news of the change, or simply failed to realise it and continued to celebrate the start of the new year during the last week of March and into April, became the butt of jokes and hoaxes and were therefore called “April fools.” These pranks included having paper fish placed on their backs and being referred to as “poisson d’avril” (April fish), said to symbolise a young, easily caught fish or a gullible person. In 1686, a writer named John Aubrey referred to the celebration as ‘Fooles holy day’, the first British reference. On 1 April 1698, several people were tricked into going to the Tower of London to “see the Lions washed”.

An 1857 ticket to “Washing the Lions” at the Tower of London. No such event was ever held.

A study in the 1950s by two folklorists found that in the UK, and in countries whose traditions derived from here, the joking ceased at midday, and this continues to be the practice: after noon it is no longer acceptable to play pranks, so a person playing a prank after midday is considered to be the ‘April fool’ themselves. Meanwhile in Scotland, April Fools’ Day was originally called ‘Huntigowk Day’. The name is a corruption of ‘hunt the gowk’, a ‘gowk’ being Scots for a cuckoo or a foolish person. Alternative terms in Gaelic would be ‘Là na Gocaireachd’, ‘gowking day’, or ‘Là Ruith na Cuthaige’, ‘the day of running the cuckoo’. The traditional prank is to ask someone to deliver a sealed message that supposedly requests help of some sort. In fact, the message reads “Dinna laugh, dinna smile. Hunt the gowk another mile.” The recipient, upon reading it, will explain they can only help if they first contact another person, and they send the victim to this next person with an identical message, with the same result. In England a ‘fool’ is known by a few different names around the country, including ‘noodle’, ‘gob’, ‘gobby’ or ‘noddy’.

Big Ben going digital…

On April Fools’ Day 1980, the BBC announced that Big Ben’s clock face was going digital and that whoever got in touch first could win the clock hands. Over in Ireland, it was traditional to entrust a victim with an “important letter” to be given to a named person. That person would read the letter, then ask the victim to take it to someone else, and so on. The letter when opened contained the words “send the fool further”. A day of pranks is also a centuries-long tradition in Poland, signified by ‘prima aprilis’, this being ‘First April’ in Latin. It is a day when many pranks are played and hoaxes, sometimes very sophisticated, are prepared by people as well as the media (which often cooperate to make the ‘information’ more credible) and even public institutions. Serious activities are usually avoided, and it is generally assumed that anything said on April 1st could be untrue. This conviction is so strong that the Polish anti-Turkish alliance with Leopold I, signed on 1 April 1683, was backdated to 31 March. But for some in Poland ‘prima aprilis’ also ends at noon of 1 April, and such jokes after that hour are considered inappropriate and in poor taste. Over in the Nordic countries, Danes, Finns, Icelanders, Norwegians and Swedes celebrate April Fools’ Day. It is ‘aprilsnar’ in Danish, ‘aprillipäivä’ in Finnish and ‘aprilskämt’ in Swedish. In these countries, most news media outlets will publish exactly one false story on 1 April, and for newspapers this will typically be a first-page article but not the top headline. In Italy, France, Belgium and the French-speaking areas of Switzerland and Canada, the April 1st tradition is similarly known as April fish, being ‘poisson d’avril’ in French, ‘April vis’ in Dutch and ‘pesce d’aprile’ in Italian. Possible pranks include attempting to attach a paper fish to the victim’s back without being noticed. This fish features prominently on many late 19th- to early 20th-century French April Fools’ Day postcards.
Many newspapers also spread a false story on April Fish Day, and a subtle reference to a fish is sometimes given as a clue to the fact that it is an April Fools’ prank. In Germany, as in the UK, an April Fool prank is sometimes later revealed by shouting “April fool!” at the recipient, who becomes the April fool. Over in Ukraine, April Fools’ Day is widely celebrated in Odessa and has the special local name ‘Humorina’. It seems that this holiday arose in 1973, and an April Fool prank is revealed by saying “Pervoye Aprelya, nikomu ne veryu”, which means “April the First, I trust nobody”, to the recipient. The festival includes a large parade in the city centre, free concerts, street fairs and performances. Festival participants dress up in a variety of costumes and walk around the city fooling around and pranking passersby. One of the traditions on April Fools’ Day is to dress up the main city monument in funny clothes. Humorina even has its own logo, a cheerful sailor in a lifebelt, created by the artist Arkady Tsykun. During the festival, special souvenirs bearing the logo are printed and sold everywhere. Quite why or how this began I cannot determine, but since 2010 the April Fools’ Day celebrations have included an International Clown Festival and both are celebrated as one. In 2019, the festival was dedicated to the 100th anniversary of the Odessa Film Studio and all events were held with an emphasis on cinema.

An April Fools’ Day prank in the Public Garden in Boston, Massachusetts.
The sign reads “No Photography Of The Ducklings Permitted”

As well as people playing pranks on one another on April Fools’ Day, elaborate pranks have appeared on radio and television stations, in newspapers and on websites, as well as ones performed by large corporations. In one famous prank in 1957, the BBC broadcast a film in their ‘Panorama’ current affairs series purporting to show Swiss farmers picking freshly-grown spaghetti, in what they called the Swiss spaghetti harvest. The BBC was soon flooded with requests to purchase a spaghetti plant, forcing them to declare the film a hoax on the news the next day. With the advent of the Internet and readily available global news services, April Fools’ pranks can catch and embarrass a wider audience than ever before. But the practice of April Fool pranks and hoaxes is somewhat controversial. The mixed opinions of critics are epitomised in the reception to the 1957 BBC ‘spaghetti tree hoax’: newspapers were split over whether it was a great joke or a terrible hoax on the public. The positive view is that April Fools’ Day can be good for one’s health because it encourages ‘jokes, hoaxes, pranks, and belly laughs’ and brings all the benefits of laughter, including stress relief and reducing strain on the heart. There are many ‘best of’ April Fools’ Day lists compiled to showcase the best examples of how the day is celebrated, and various April Fools’ campaigns have been praised for their innovation, creativity, writing and general effort. However, the negative view describes April Fools’ hoaxes as ‘creepy and manipulative, rude and a little bit nasty’, as well as being based on deceit and on ‘Schadenfreude’, the experience of pleasure, joy or self-satisfaction that comes from learning of or witnessing the troubles, failures or humiliation of another.
When genuine news or a genuine important order or warning is issued on April Fools’ Day, there is a risk that it will be misinterpreted as a joke and ignored. For example, when Google (known to play elaborate April Fools’ Day hoaxes) announced in 2004 the launch of Gmail with one-gigabyte inboxes, in an era when competing webmail services offered four megabytes or less, many dismissed it as an outright joke. On the other hand, sometimes stories intended as jokes are taken seriously. So either way, there can be adverse effects such as confusion, misinformation, wasted resources (especially when the hoax concerns people in danger) and even legal or commercial consequences. In Thailand, the police even warned ahead of April Fools’ Day in 2021 that posting or sharing fake news online could lead to a maximum of five years’ imprisonment. Other examples of genuine news on April 1st mistaken for a hoax include the warnings about the Aleutian Islands earthquake’s tsunami in Hawaii and Alaska in 1946 that killed 165 people, the news on April 1st that the comedian Mitch Hedberg had died on 29 March 2005, the announcement in 2009 that the long-running US soap opera ‘Guiding Light’ was being cancelled, and the news in 2011 that the US basketball player Isaiah Thomas had declared for the NBA draft, probably because of his age. As well as April 1st being recognised as April Fools’ Day, there are a few other recognisable days, notably the first of each month when, in English-speaking countries (mainly Britain, Ireland, Australia, New Zealand and South Africa), it is a custom to say “a pinch and a punch for the first of the month” or a similar alternative, though this is typically said by children. In some places the victim might respond with “a flick and a kick for being so quick”, but that I haven’t heard said for many a long year.
I do still say (or share in text messages etc) “White rabbits” as this is meant to bring good luck and to prevent the recipient saying ‘pinch, punch, first of the month’ to you! I do wonder sometimes how one of my older brothers managed at school on this particular day though, as April 1st is his birthday – perhaps he managed to keep it quiet somehow…

This week…
There are so many words in English that seem to have fallen out of use and I am starting to find a few. We know that when a word is used to lay emphasis on a noun, it is called an emphatic adjective. Examples are found in “The very idea of living on the moon is impractical” and “They are the only people who helped me”, where ‘very’ and ‘only’ provide the emphasis. But there are also ‘phatic’ expressions, these being ones denoting or relating to language used for general purposes of social interaction, rather than to convey information or ask questions. Utterances such as “hello, how are you?” and “nice morning, isn’t it?” are examples of phatic expressions.

Click: Return to top of page or Index page

This Earth

This Earth has been in existence for quite a long while and I do wonder how many folk consider that, and how much this amazing planet has changed over time. We as humans haven’t been here all that long and it is generally believed that as a race, Homo sapiens evolved in Africa during a time of dramatic climate change some 300,000 years ago. Like other early humans living around that time, we gathered and hunted for food, evolving behaviours that helped us to respond to the challenges of survival in unstable environments. To begin with, we certainly had a few ideas about ourselves and the Earth itself that have since been proven wrong, a few of these misconceptions being as follows. Ancient Greek and Roman sculptures were originally painted with bright colours; they only appear white today because the original pigments have deteriorated, and some well-preserved statues still bear traces of their original colouration. Also, the tomb of Tutankhamen is not inscribed with a curse on those who disturb it; this was a media invention of 20th-century tabloid journalists. The ancient Greeks did not use the word ‘idiot’ to disparage people who did not take part in civic life or who did not vote. An idiot was simply a private citizen as opposed to a government official. Later, the word came to mean any sort of non-expert or layman, then someone uneducated or ignorant, and much later to mean stupid or mentally deficient.

Oath of the Horatii by Jacques-Louis David in 1784.

According to ancient Roman legend, the Horatii were triplet warriors who lived during the reign of Tullus Hostilius (r. 672–640 BC). Accounts of his death vary: in the mythological version of events he had angered Jupiter, who then killed him with a bolt of lightning, whilst non-mythological sources describe him dying of a plague after ruling for 32 years. There is also no evidence of the use of the Roman salute by ancient Romans (as depicted in the above painting) for greeting or any other purpose. The idea that the salute was popular in ancient times originated from the painting, but it then inspired later salutes, most notably the Nazi salute. Another idea was that Julius Caesar was born via Caesarean section, but at the time of his birth such a procedure would have been fatal to the mother, and Caesar’s mother was still alive when Caesar was 45 years old. The name ‘caesarean’ probably comes from the Latin verb ‘caedere’, meaning ’to cut’. Also there is the myth of the Earth being flat. In fact the earliest clear documentation of the idea of a spherical Earth comes from the ancient Greeks in the 5th century BC. The belief was widespread in Greece by the time of Eratosthenes of Cyrene (c. 276–194 BC), a mathematician, geographer, poet, astronomer and music theorist who became the chief librarian at the Library of Alexandria and introduced some of the terminology still in use today. As a result, most European and Middle Eastern scholars accepted that the Earth was spherical, and belief in a flat Earth amongst educated Europeans was almost nonexistent from the Late Middle Ages onward, although fanciful depictions appear in some art. However, by the 1490s there was still an issue as to the size of the Earth and in particular the position of the east coast of Asia.
Historical estimates from Ptolemy, also a mathematician, astronomer, astrologer, geographer and music theorist, placed the coast of Asia about 180° east of the Canary Islands. It was Columbus who adopted an earlier (and rejected) distance of 225°, added 28° (based on Marco Polo’s travels), and then placed Japan a further 30° east. Starting from Cape St Vincent in Portugal, Columbus made Eurasia stretch 283° to the east, leaving the Atlantic as only 77° wide. Since he planned to leave from the Canaries, 9° further west, his trip to Japan would only have to cover 68° of longitude. Columbus mistakenly assumed that the mile referred to in the Arabic estimate of 56⅔ miles for the size of a degree was the same as the actually much shorter Italian mile of 1,480 metres. His estimate for the size of the degree and for the circumference of the Earth was therefore about 25% too small. The combined effect of these mistakes was that Columbus estimated the distance to Japan to be only about 5,000km, or only to the eastern edge of the Caribbean whilst the true figure is about 20,000km. The Spanish scholars may not have known the exact distance to the east coast of Asia, but they believed that it was significantly further than Columbus’s projection and this was the basis of the criticism in Spain and Portugal, whether academic or among mariners, of the proposed voyage. The disputed point was not the shape of the Earth, nor the idea that going west would eventually lead to Japan and China, but the ability of European ships to sail that far across open seas. The small ships of the day simply could not carry enough food and water to reach Japan as Columbus’s three ships varied in length between 20.5 and 23.5 metres, or 67 to 77 feet, and carried about 90 men. 
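For those who enjoy the arithmetic, Columbus’s miscalculation is easy to reproduce. The short sketch below simply reworks the figures quoted above (the 1,480-metre Italian mile, the 56⅔ miles per degree and the 68° gap from the Canaries at roughly 28° N); it is an illustration of the sums, not a historical source:

```python
import math

# Columbus took the Arabic estimate of 56 2/3 miles per degree of longitude,
# but read "mile" as the much shorter Italian mile of about 1,480 metres.
italian_mile_m = 1480
degree_km = (56 + 2 / 3) * italian_mile_m / 1000  # length of one degree, in km

# His implied circumference, versus the true figure of roughly 40,075 km.
circumference_km = 360 * degree_km  # about 25% too small

# The westward gap from the Canaries to Japan on his reckoning was 68 degrees;
# at the latitude of the Canaries (~28 N) a degree of longitude is shorter still.
gap_km = 68 * degree_km * math.cos(math.radians(28))

print(round(circumference_km))  # roughly 30,000 km
print(round(gap_km))            # roughly 5,000 km, versus ~20,000 km in reality
```

The last figure matches the ~5,000 km quoted above: only as far as the eastern edge of the Caribbean, which is precisely where Columbus made landfall.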
In fact the ships barely reached the eastern Caribbean islands, as already the crews were mutinous, not because of some fear of ‘sailing off the edge’, but because they were running out of food and water with no chance of any new supplies within sailing distance. They were on the edge of starvation. What saved Columbus was the unknown existence of the Americas precisely at the point he thought he would reach Japan. His ability to resupply with food and water from the Caribbean islands allowed him to return safely to Europe; otherwise his crews would have died and the ships foundered. Since the early 20th century, quite a number of books and articles have documented the flat Earth error as one of a number of widespread misconceptions in the popular view of the Middle Ages, and although the misconception has been frequently refuted in historical scholarship since at least 1920, it persisted in popular culture and in some school textbooks into the 21st century. An American schoolbook by Emma Miller Bolenius published in 1919 has this introduction to the suggested reading for Columbus Day, October 12th: “When Columbus lived, people thought that the Earth was flat. They believed the Atlantic Ocean to be filled with monsters large enough to devour their ships, and with fearful waterfalls over which their frail vessels would plunge to destruction. Columbus had to fight these foolish beliefs in order to get men to sail with him. He felt sure the Earth was round”.

The semi-circular shadow of Earth on the Moon during a partial lunar eclipse.

Pythagoras in the 6th century BC and Parmenides in the 5th century BC stated that the Earth was spherical, and this view spread rapidly in the Greek world. Around 330 BC Aristotle maintained, on the basis of physical theory and observational evidence, that the Earth was indeed spherical, and reported an estimate of its circumference; the value was first determined around 240 BC by Eratosthenes. By the 2nd century AD, Ptolemy had derived his maps from a globe and developed the system of latitude, longitude and climes. His Almagest was a Greek-language mathematical and astronomical treatise on the apparent motions of the stars and their planetary paths. One of the most influential scientific texts in history, it canonised a geocentric model of the Universe that was accepted for more than 1,200 years, from its Hellenistic origin through the medieval Byzantine and Islamic worlds as well as in Western Europe through the Middle Ages and early Renaissance, until Copernicus. It is also a key source of information about Ancient Greek astronomy. The work was originally written in Greek and only translated into Latin in the 12th century, from Arabic translations. It is fascinating to consider that in the first century BC, Lucretius opposed the concept of a spherical Earth because he considered that an infinite universe had no centre towards which heavy bodies would tend. Thus he thought the idea of animals walking around topsy-turvy under the Earth was absurd. By the 1st century AD, Pliny the Elder was in a position to claim that everyone agreed on the spherical shape of Earth, though disputes continued regarding the nature of the antipodes, and how it was possible to keep the oceans in a curved shape.
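Eratosthenes’ method itself is delightfully simple and can be checked in a few lines. The traditional account has the Sun directly overhead at Syene at noon on the summer solstice, while at Alexandria, about 5,000 stadia to the north, a vertical rod cast a shadow at an angle of about 7.2°, one fiftieth of a full circle. The exact length of a stadion is still debated, so the metric conversion below uses one commonly assumed modern value of 157.5 metres:

```python
# Eratosthenes' estimate (c. 240 BC), in modern terms.
shadow_angle_deg = 7.2            # shadow angle at Alexandria: 1/50 of a circle
alexandria_to_syene_stadia = 5000 # traditional distance between the two cities

# The shadow angle is the fraction of the full circle spanned by that distance,
# so scaling up gives the whole circumference.
circumference_stadia = alexandria_to_syene_stadia * 360 / shadow_angle_deg

# Assuming one stadion is roughly 157.5 metres (one common modern reading):
circumference_km = circumference_stadia * 157.5 / 1000

print(circumference_stadia)  # 250,000 stadia
print(circumference_km)      # close to the true figure of ~40,075 km
```

On that reading of the stadion his answer comes out within a couple of per cent of the modern value, which is remarkable for the third century BC.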

Thorntonbank Wind Farm near the Belgian coast.

In the above image of Thorntonbank Wind Farm, the lower parts of the more distant towers are increasingly hidden by the horizon, demonstrating the curvature of the Earth. But even in the modern era, the pseudoscientific belief in a flat Earth originated with the English writer Samuel Rowbotham in his 1849 pamphlet ‘Zetetic Astronomy’. Lady Elizabeth Blount established the Universal Zetetic Society in 1893, which published journals. There were other flat-Earthers in the 19th and early 20th centuries, and in 1956 Samuel Shenton set up the International Flat Earth Research Society (IFERS), better known as the “Flat Earth Society”, from Dover, England, as a direct descendant of the Universal Zetetic Society. In the era of the Internet, the availability of communications technology and of social media such as Facebook, YouTube and Twitter has made it easy for individuals, famous or not, to spread disinformation and attract others to erroneous ideas, including that of the flat Earth. I still smile at the advert I once saw which read “Join the Flat Earth Society – branches all around the world”. To maintain belief in the face of overwhelming contrary, publicly available empirical evidence accumulated in the Space Age, modern flat-Earthers must generally embrace some form of conspiracy theory out of the necessity of explaining why major institutions such as governments, media outlets, schools, scientists and airlines all assert that the world is a sphere. They tend not to trust observations they have not made themselves, and often distrust or disagree with each other. As so many do over so many things. I think that what can also be difficult to comprehend or imagine is the sheer size of our Earth, our solar system, the Milky Way and beyond. Science has enabled us to see, through microscopes and the like, things which are so tiny that we need devices to perceive them.
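For anyone curious, the way those towers sink below the horizon can be put into numbers with a little schoolbook geometry: for an observer at height h, the distance to the horizon is roughly √(2Rh), and anything beyond it drops out of sight by a similar rule. The sketch below ignores atmospheric refraction (which in reality lets us see slightly further), and the 27 km figure is my own rough assumption for the distance from the beach to the turbines:

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean radius of the Earth, in metres

def horizon_distance_m(observer_height_m: float) -> float:
    """Approximate distance to the horizon, ignoring refraction."""
    return math.sqrt(2 * EARTH_RADIUS_M * observer_height_m)

def hidden_height_m(observer_height_m: float, target_distance_m: float) -> float:
    """How much of a distant object's base is hidden below the horizon."""
    beyond = target_distance_m - horizon_distance_m(observer_height_m)
    if beyond <= 0:
        return 0.0  # target is nearer than the horizon: nothing is hidden
    return beyond ** 2 / (2 * EARTH_RADIUS_M)

# A viewer standing with eyes 2 m above the waterline, looking at turbines
# assumed to be about 27 km offshore:
print(round(horizon_distance_m(2) / 1000, 1))  # horizon is about 5 km away
print(round(hidden_height_m(2, 27_000)))       # tens of metres of tower hidden
```

So from the beach, several tens of metres of each distant tower sit below the sightline, which is exactly the effect visible in the photograph.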
We do now have telescopes, but even those often use infra-red (which our eyes cannot see naturally) to ‘see’ what is a great distance from our planet. On one of the websites I look at there are often questions raised which are good ones, but equally there are a few which show that the writer seems to have no concept of how large the Universe really is. As an example, one question recently shared was “If telescopes can see billions of light years away, what stops us from seeing detailed images of planet surfaces to check for plants or other life?”. The answer given was that the Andromeda Galaxy is actually about 2.5 million light-years from Earth, yet even when we use the Hubble telescope to see the surface of the planet Mars, which is only about 0.000042 light-years away, the sharpest image of the surface of Mars is very blurry, like the one below. This is because of the objects’ relative sizes: the Andromeda Galaxy, although incredibly distant, is so large that its apparent size, when viewed from Earth, is massive. From this we can understand why distant galaxies can be seen well with a telescope.
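The point about relative sizes can be made with a quick ‘angular size’ calculation, that is, how big each object looks in the sky rather than how big it is. The figures below are approximate (Andromeda’s quoted diameter varies between estimates, and Mars’s distance changes constantly), but they show the idea:

```python
import math

def angular_size_arcsec(diameter: float, distance: float) -> float:
    """Apparent size in arcseconds, small-angle approximation.
    Diameter and distance must be in the same units."""
    return math.degrees(diameter / distance) * 3600

LY_KM = 9.46e12  # kilometres in one light-year

# Mars at a close approach: roughly 6,800 km across, 56 million km away.
mars = angular_size_arcsec(6.8e3, 5.6e7)

# The Andromeda Galaxy: assumed here to be ~150,000 light-years across,
# at ~2.5 million light-years' distance.
andromeda = angular_size_arcsec(1.5e5 * LY_KM, 2.5e6 * LY_KM)

print(round(mars, 1))           # about 25 arcseconds
print(round(andromeda / 3600))  # a few degrees: several full Moons wide
```

So Mars, despite being millions of times closer, appears hundreds of times smaller in our sky than Andromeda does, which is why a galaxy photographs beautifully while a planet stays a blurry disc.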

The Andromeda Galaxy.
The surface of Mars.

So far as this Earth is concerned, whilst we have generally explored almost the entire continental surface, with the exception of Antarctica that is, there are substantial parts of the ocean that remain unexplored and not fully studied. Even the latest technological advances for mapping the seafloor are limited by what they can do in the oceans. I have mentioned before a computer app that is freely available and which also utilises a website called What3Words. It divides the world into individual three-metre squares and gives each one a unique three-word address, in order for people to be easily found in emergencies. It also gives people without a formal address access to one for the first time, whether for a permanent home or a spot halfway up the side of a mountain, for example. I think this is especially useful for emergency services to locate people, even at sea, as the UK version includes that. Whether we think this is a good thing or not, it means that everywhere in the world now has an address, even a tent in the middle of a field or a ditch on the North York Moors! The website is https://what3words.com and one example, in this case the entrance to Peterborough railway station, is w3w.co/energetic.copies.rope and clicking this link opens a web page showing a map of Peterborough, with the square allocated by what3words to the railway station entrance. There are no duplications. The app is available on Apple and, I believe, Google devices; I think it may be of use to folk on such things as countryside walks or simply meeting up with friends.
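Out of curiosity I did a little arithmetic on this three-word scheme. The sketch below is emphatically not What3Words’ actual algorithm, just a back-of-envelope check that three words drawn from a modest word list really can give every three-metre square on Earth its own address:

```python
# Back-of-envelope combinatorics for three-word addressing.
# NOT the real what3words algorithm - just a feasibility check.
EARTH_SURFACE_M2 = 510e12  # Earth's surface area: ~510 million square km
SQUARE_M2 = 3 * 3          # each address covers one 3 m x 3 m square

squares = EARTH_SURFACE_M2 / SQUARE_M2  # tens of trillions of squares

# With a word list of n words, three ordered words give n**3 combinations,
# so the list needs to be at least the cube root of the number of squares.
words_needed = round(squares ** (1 / 3))

print(f"{squares:.2e}")  # about 5.7e13 squares
print(words_needed)      # under 40,000 words is enough
```

A vocabulary of a few tens of thousands of words, cubed, comfortably covers tens of trillions of squares, which is why three ordinary words are enough for the whole planet with no duplications.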

In previous blog posts I have written a little about this Earth, its language and transportation by road and rail as well as aviation. Each has its individual benefits and we have certainly come a long way in these things. Sadly however, so many advances have been as a result of wars, with either individuals or groups for some reason wanting to better another. As a result I still struggle to comprehend this human need. Still, it is going on around us and I expect will continue to do so for years to come, long after I am no longer here. Just as those before us lived for a time and passed away, so will others. I remember when I was quite young talking to our local vicar after he had talked about heaven and earth, and me saying to him how I thought that Heaven must be a very big place, thinking about all the many people who had died over time. I remember the vicar smiling gently and telling me that I was applying Earthly values to Heavenly things. I didn’t understand him at the time, but I learned in time that he was right. It took a long time to realise just how vastly, hugely enormous the Universe is; we simply cannot imagine it. But it exists, at least I believe it does! So when I learn of how certain people on this beautiful Earth are behaving, I think on how their lives will end, new ones will spring up, things will change and I hope, in years to come, we may yet learn to all live peacefully together. You will forgive me if I do not hold my breath on that one though! This world turns, the seasons change, no matter what our individual thoughts or our beliefs may be. There is good in the world, we must believe in it, do all we can, openly and honestly, and be thankful for what we have. We still have a few million years left!

This week…a tale from a few years ago.
I had bought an old Land Rover Series 3 which was quite good, but I found it needed a bit of repair to the steering mechanism. It meant that as I was driving along, rather than steering straight I was constantly correcting it, so the vehicle would seem to almost ‘wander’ from side to side a little! I was driving home one day and was stopped by a local policeman, who stood by the driver’s open window in such a way that he could smell my breath – Land Rovers sit quite high up on the road. He asked me if I had been drinking, or had I only just bought the vehicle. He already knew the answers to both questions, but was checking with me! I assured him I had not been drinking but had recently purchased the vehicle; he agreed and even recommended a local garage who specialised in Land Rover repairs. I was advised to get the steering problem attended to as soon as possible and when I took it to this garage, the staff there were sure they actually knew who this policeman was, as he himself was the proud owner of a Land Rover and was a regular customer of theirs!

Click: Return to top of page or Index page

Human Aviation

We have for so many years been fascinated by watching birds fly and have tried to do so ourselves. There are a few myths and legends of flight and my research has found some entertaining ones – these are just a few of them. According to Greek legend, Bellerophon the Valiant, the son of the King of Corinth, captured Pegasus the winged horse, who took him into battle against the triple-headed monster, Chimera. In another Ancient Greek legend, King Minos imprisoned an engineer named Daedalus, and with his son Icarus he made wings of wax and feathers. Daedalus flew successfully from Crete to Naples, but Icarus tried to fly too high and flew too near to the sun, so the wings of wax melted and Icarus fell to his death in the ocean. It is also said that King Kaj Kaoos of Persia attached eagles to his throne and flew around his kingdom, whilst Alexander the Great harnessed four great mythical winged animals called Griffins to a basket and flew around his realm. But in fact I understand it was around 400 BC that the Chinese first made kites that could fly in the air, and this started us thinking about flying. To begin with, kites were used by the Chinese in religious ceremonies and they built many colourful ones for fun; later, more sophisticated kites were used to test weather conditions. Kites have been important to the invention of flight as they were the forerunners of balloons and gliders. People have tested their ability to fly by attaching feathers or lightweight wood to their arms, but the results were often disastrous, as the muscles of human arms are simply not like the wings of birds and do not have the required strength. But an ancient Greek engineer named Hero of Alexandria worked with air pressure and steam to create sources of power, and one of the devices he developed was the ‘aeolipile’, which used jets of steam to create rotary motion.
Hero mounted a sphere on top of a water kettle; a fire below the kettle turned the water into steam and the gas then travelled through pipes to the sphere. Two L-shaped tubes on opposite sides of the sphere allowed the gas to escape, which gave a thrust to the sphere that caused it to rotate. Leonardo da Vinci made the first real studies of flight in the 1480s and made over 100 drawings that illustrated his theories on flight, but his Ornithopter flying machine was never actually created. It was a design he created to show how man could fly, and the modern-day helicopter is based on this concept. The two brothers Joseph-Michel and Jacques-Etienne Montgolfier were inventors of the first hot air balloon; they used the smoke from a fire to blow hot air into a silk bag which was attached to a basket. The hot air then rose and allowed the balloon to become lighter than air. In 1783 the first passengers in the colourful balloon were a sheep, a rooster and a duck. It climbed to a height of about 6,000 feet and travelled more than a mile, and after this first success the brothers began to send men up in balloons. The first manned flight was on November 21, 1783; the passengers were Jean-François Pilâtre de Rozier and François Laurent. Meanwhile, George Cayley worked to discover a way that man could fly. He designed many different versions of gliders that used the movements of the body for control, and a young boy, whose name is not known, was the first to fly one of his gliders. Over fifty years Cayley made improvements to his gliders, changing the shape of the wings so that the air would flow over them correctly. He also designed a tail for the gliders to help with stability. He tried a biplane design to add strength to the glider and recognised that there would be a need for power if the flight was to be in the air for a long time.
Cayley also wrote ‘On Aerial Navigation’, which showed that a fixed-wing aircraft with a power system for propulsion and a tail to assist in the control of the aeroplane would be the best way to allow man to fly. A German engineer, Otto Lilienthal, studied aerodynamics and worked to design a glider that would fly. Fascinated by the idea of flight, he was the first person to design a glider that could carry a person and that was able to fly long distances. Based on his studies of birds and how they flew, he wrote a book on aerodynamics that was published in 1889, and this text was used by the Wright Brothers as the basis for their designs. Around the same time Samuel Langley, an astronomer, realised that power was needed to help man fly. He built a model of an aircraft which he called an ‘aerodrome’ that included a steam-powered engine, and in 1896 his model flew three-quarters of a mile before running out of fuel. Langley then received a $50,000 grant to build a full-sized ‘aerodrome’, but it was too heavy to fly and it crashed. He was of course very disappointed at this and gave up trying to fly. His major contributions to flight involved attempts at adding a power plant to a glider; he was well known too as the Secretary of the Smithsonian Institution in Washington, DC in the USA.

A Wright Brothers Unpowered Aircraft.

Orville and Wilbur Wright were very deliberate in their quest for flight. First, they read about all the early developments of flight. They decided to make “a small contribution” to the study of flight control by twisting their wings in flight, and they began to test their ideas with a kite. They learned how the wind would help with the flight and how it could affect the surfaces once up in the air. Using a methodical approach concentrating on the controllability of the aircraft, the brothers built and tested a series of kite and glider designs from 1898 to 1902 before attempting to build a proper powered design. The gliders worked, but not as well as the Wrights had expected based on the experiments and writings of their predecessors. Their first full-size glider, launched in 1900, had only about half the lift they anticipated. Their second glider, built the following year, performed even more poorly, but rather than giving up, the Wrights constructed their own wind tunnel and created a number of sophisticated devices to measure lift and drag on the 200 wing designs they tested. As a result, the Wrights corrected earlier mistakes in their calculations and, with much testing and calculating, produced a third glider with a higher aspect ratio and true three-axis control. They flew it successfully hundreds of times in 1902, and it performed far better than the previous models. They also tested the shapes of gliders, much as George Cayley had done when testing the many different shapes that would fly. Finally, with a perfected glider shape, they turned their attention to creating a propulsion system that would provide the thrust needed to fly. The early engine that they designed generated almost 12 horsepower: about the same power as two hand-propelled lawn mower engines! The “Flyer” lifted from level ground to the north of Big Kill Devil Hill, North Carolina, at 10:35 a.m. on December 17, 1903. 
Orville piloted the plane, which weighed about six hundred pounds. The first heavier-than-air flight travelled one hundred and twenty feet in twelve seconds. The two brothers took turns flying that day, with the fourth and last flight covering 850 feet in 59 seconds, but the Flyer was unstable and very hard to control. The brothers returned to Dayton, Ohio, where they worked for two more years perfecting their design until finally, on October 5, 1905, Wilbur piloted the Flyer III for 39 minutes and about 24 miles in circles around Huffman Prairie. He flew the first practical aircraft until it ran out of fuel. By using a rigorous system of experimentation, involving wind-tunnel testing of airfoils and flight testing of full-size prototypes, the Wrights not only built a working aircraft but also helped advance the science of aeronautical engineering. The brothers appear to have been the first to make serious, studied attempts to solve both the power and control problems simultaneously. These problems proved difficult, but they never lost interest and eventually solved them. Then, almost as an afterthought, they designed and built a low-powered internal combustion engine. They also designed and carved wooden propellers that were more efficient than any before, enabling them to gain adequate performance from their low engine power. Whilst many aviation pioneers appeared to leave safety largely to chance, the Wrights’ design was greatly influenced by the need to teach themselves to fly without unreasonable risk to life and limb, by surviving crashes! This emphasis, as well as low engine power, was the reason for the low flying speed and for taking off into a headwind. Performance, rather than safety, was the reason for the rear-heavy design, because it made the aircraft less affected by crosswinds and easier to fly. 
Since then, many new aeroplanes and different engines have been developed to help transport people, luggage, cargo, military personnel and weapons around the globe, but all of these advances were based on the first flights by the Wright Brothers.

The Wright Flyer, the first sustained flight with a powered, controlled aircraft.

In fact the history of aviation extends back more than two thousand years, from the earliest forms such as kites and even attempts at tower jumping, all the way through to supersonic flight by powered, heavier-than-air jets. The discovery of hydrogen gas in the 18th century led to the invention of the hydrogen balloon at almost exactly the same time that the Montgolfier brothers rediscovered the hot-air balloon and began manned flights. Various theories in mechanics developed by physicists during the same period, notably fluid dynamics and Newton’s laws of motion, laid the foundation of modern aerodynamics. Balloons, both free-flying and tethered, began to be used for military purposes from the end of the 18th century, with the French government establishing Balloon Companies during the Revolution. Experiments with gliders provided the groundwork for heavier-than-air craft, and by the early 20th century advances in engine technology and aerodynamics made controlled, powered flight possible for the first time. The modern aeroplane with its characteristic tail was established by 1909, and from then on its history became tied to the development of ever more powerful engines. The first great ships of the air were the rigid dirigible balloons pioneered by Ferdinand von Zeppelin, a name which soon became synonymous with airships, and these dominated long-distance flight until the 1930s, when large flying boats became popular. The ‘pioneer’ era from 1903 to 1914 also saw the development of practical aeroplanes and airships and their early application, alongside balloons and kites, for private, sport and military use. Eventually, flight became an established technology, and over a period of a few years more controls were added, bringing recognition of powered flight as something other than the preserve of dreamers and eccentrics. 
Such things as ailerons, radio-telephones and guns were included, and it was not long before aircraft were shooting at each other, but the lack of any sort of steady point for the gun was a problem. The French solved this problem when, in late 1914, Roland Garros attached a fixed machine gun to the front of his aircraft. Aviators were styled as modern-day knights, doing individual combat with their enemies. Several pilots became famous for their air-to-air combat, the most well known being Manfred von Richthofen, better known as the ‘Red Baron’, who shot down eighty planes in air-to-air combat using several different aircraft, the most celebrated of which was a red triplane, one fitted with three wings. France, Britain, Germany and Italy were the leading manufacturers of the fighter planes that saw action during the war, and in the years between the two World Wars there were great advancements in aircraft technology. Aircraft evolved from low-powered biplanes and triplanes made from wood and fabric to sleek, high-powered monoplanes made of aluminium, based primarily on the founding work of Hugo Junkers during World War I and its adoption by other designers. As a result, the age of the great rigid airships came and went. The first successful flying machines that used rotary wings appeared in the form of the autogyro, first flown in 1923. In that design, the rotor is not powered but is spun like a windmill by its passage through the air, whilst a separate power-plant is used to propel the aircraft forwards. Helicopters were then developed, and in the 1930s development of the jet engine began in Germany and in Britain; both countries would go on to develop jet aircraft by the end of World War II. This era saw a great increase in the pace of development and production, not only of aircraft but also of the associated flight-based weapon delivery systems. Air combat tactics and doctrines took advantage of these developments. 
Large-scale strategic bombing campaigns were launched, fighter escorts were introduced, and more flexible aircraft and weapons allowed precise attacks on small targets with various types of attack aircraft. New technologies such as radar also allowed more coordinated and controlled deployment of air defence.

Messerschmitt Me262, the first operational jet fighter.

The first jet aircraft to fly was the German Heinkel He178 in 1939, followed by the Me262, which first flew under jet power in July 1942 and went on to become the world’s first operational jet fighter. British developments like the Gloster Meteor followed afterwards, but these saw only brief use in World War II. Jet and rocket aircraft had only a limited impact due to their late introduction, fuel shortages, a real lack of experienced pilots and the declining war industry of Germany. In the latter part of the 20th century, the advent of digital electronics produced great advances in flight instrumentation and “fly-by-wire” systems, with the 21st century bringing the large-scale use of pilotless drones for military, civilian and leisure use, and inherently unstable aircraft such as ‘flying wings’ becoming possible through the use of digital controls.

The de Havilland Comet, the world’s first jet airliner, which also saw service in the Royal Air Force.

Also after World War II, commercial aviation grew rapidly, using mostly ex-military aircraft to transport people and cargo. By 1952, the British Overseas Airways Corporation (BOAC) had introduced the Comet into scheduled service. Whilst a technical achievement, the plane suffered a series of highly public failures, as the shape of its windows led to cracks due to metal fatigue. The fatigue was caused by cycles of pressurisation and depressurisation of the cabin and eventually led to catastrophic failure of the plane’s fuselage. By the time the problems were overcome by making the windows oval rather than square, other jet airliner designs had already taken to the skies. Much more could be written here about the changes, including the ‘jet age’, supersonic flight, even getting into space, but I think that will be for another time. Suffice to say that 21st-century aviation has seen increasing interest in fuel savings and fuel diversification, as well as in low-cost airlines and facilities. Also, much of the developing world that did not have good access to air transport has been steadily adding aircraft and facilities, though severe congestion remains a problem in many up-and-coming nations. But we continue to strive, to develop. On 19 April 2021, the National Aeronautics and Space Administration (NASA) successfully flew an unmanned helicopter on Mars, making it humanity’s first controlled, powered flight on another planet. ‘Ingenuity’ rose to a height of 3 metres and hovered in a stable holding position for 30 seconds, after a vertical take-off that was filmed by its accompanying rover, ‘Perseverance’. Then on 22 April 2021, ‘Ingenuity’ made a second, more complex flight, which was also observed by ‘Perseverance’. As an homage to all of its aerial predecessors, the ‘Ingenuity’ helicopter carries with it a very small, postage-stamp-sized fragment from the wing of the 1903 Wright Flyer.

This week…
It just goes to show how some historical events aren’t remembered. Back in 2016, on the television quiz show ‘Pointless’, a relatively young contestant chose to answer the question “Who was assassinated by Lee Harvey Oswald in Dallas?”. They answered, somewhat hesitantly, “J.R.?”, meaning J.R. Ewing from ‘Dallas’, an American television soap opera which aired from 1978 to 1991. The correct answer was John F. Kennedy, the president of the United States, who was assassinated on November 22, 1963.

Click: Return to top of page or Index page

Transportation By Road

Road transport started with the development of tracks by us and our ‘beasts of burden’. The first forms of road transport were horses and oxen, used for carrying goods over tracks that often followed game trails, along routes such as the Natchez Trace, a historic forest trail within the United States of America. That trail extends roughly 440 miles from Nashville, Tennessee to Natchez, Mississippi and links the Cumberland, Tennessee and Mississippi rivers. It was created and used by Native Americans for centuries and later used by early European and American explorers, traders and emigrants in the late 18th and early 19th centuries. European Americans founded inns along the Trace to serve food and lodging to travellers; however, as travel shifted to steamboats on the Mississippi and other rivers, most of these inns closed. Today the path is commemorated by a parkway which uses the same name and follows the approximate route of the Trace. In the Palaeolithic Age, we did not need constructed tracks in open country, and the first improved trails would have been at fords, mountain passes and through swamps. The first improvements were made by clearing trees and big stones from the path, and as commerce increased, the tracks were often flattened or widened to more easily accommodate human and animal traffic. Some of these dirt tracks developed into fairly extensive networks, allowing for communications, trade and governance over wider areas. The Incan Empire in South America and the Iroquois Confederation in North America, neither of which had the wheel at that time, are examples of effective use of such paths. The first transportation of goods was made on human backs and heads, but the use of pack animals, including donkeys and horses, was developed during the Neolithic Age. The first vehicle is believed to have been the travois, from the French ‘travail’, a frame for restraining horses.

Cheyenne using a Travois.

Travois were probably used in other parts of the world before the invention of the wheel, and developed in Eurasia after the first use of bullocks for the pulling of ploughs. In about 5000 BC, sleds were developed; these are more difficult to build than travois, but are easier to propel over smooth surfaces. Pack animals, ridden horses and bullocks dragging travois or sleds require wider paths and higher clearances than people on foot, and so improved tracks were required. By about 5000 BC, proper roads were being developed along ridges in England to avoid crossing rivers and getting bogged down. Travellers have used these ridgeways for a great many years, and the Ridgeway National Trail itself follows an ancient path from Overton Hill near Avebury to Streatley. It then follows footpaths and parts of the ancient Icknield Way through the Chiltern Hills to Ivinghoe Beacon in Buckinghamshire. Ridgeways provided a reliable trading route to the Dorset coast and to the Wash in Norfolk, as the high and dry ground made travel easy and provided a measure of protection by giving traders a commanding view, warning against potential attacks. During the Iron Age, the local inhabitants took advantage of the high ground by building hill-forts along the Ridgeway to help defend the trading route. Then, following the collapse of Roman authority in Western Europe, the invading Saxon and Viking armies used it. In medieval times and later, the ridgeways found use by drovers, moving their livestock from the West Country and Wales to markets in the Home Counties and London. The Enclosure Acts, or to use the archaic spelling ‘Inclosure Acts’, covered the enclosure of open fields and common land in England and Wales, creating legal property rights to land previously held in common. Between 1604 and 1914, over 5,200 individual enclosure acts were passed, affecting just under 11,000 square miles. 
Before these enclosures in England, a portion of the land was categorised as ‘common’ or ‘waste’. Whilst common land was under the control of the lord of the manor, certain rights on the land such as pasture, pannage or estovers (an allowance made to a person out of an estate, or other thing, for their support) were held variously by certain nearby properties, or occasionally ‘in gross’ by all manorial tenants. ‘Waste’ was land without value as a farm strip: often very narrow areas (typically less than a yard wide) in difficult locations such as cliff edges or awkwardly shaped manorial borders, but also bare rock. Waste was not officially used by anyone, and so was often farmed by landless peasants. The remaining land was organised into a large number of narrow strips, each tenant possessing a number of strips throughout the manor; what might now be termed a single field would have been divided under this system amongst the lord and his tenants, whilst poorer peasants were allowed to live on the strips owned by the lord in return for cultivating his land. The system facilitated common grazing and crop rotation. Once enclosures started, the paths developed through the building of earth banks and the planting of hedges.

A Greek street – 4th or 3rd century BC.

Wheels appear to have been developed in ancient Sumer in Mesopotamia around 5000 BC, perhaps originally for the making of pottery. Their original transport use may have been as attachments to travois or sleds to reduce resistance. Most early wheels appear to have been attached to fixed axles, which would have required regular lubrication by animal fats or vegetable oils, or separation by leather, to be effective. The first simple two-wheeled carts, apparently developed from travois, appear to have been used in Mesopotamia and northern Iran in about 3000 BC, and two-wheeled chariots appeared in about 2800 BC. They were hauled by onagers, an Asiatic wild ass related to the donkey. Heavy four-wheeled wagons were then developed about 2500 BC; these were only suitable for oxen-haulage and were therefore only used where crops were cultivated. Two-wheeled chariots with spoked wheels appear to have been developed around 2000 BC by the Andronovo culture in southern Siberia and Central Asia, and at much the same time the first primitive harness was invented, enabling horse-drawn haulage. Wheeled transport created the need for better roads, as natural materials were generally not found to be both soft enough to form well-graded surfaces and strong enough to bear wheeled vehicles, especially when wet, and stay intact. In urban areas it became worthwhile to build stone-paved streets, and the first paved streets appear to have been built around 4000 BC. Log roads, made by placing logs perpendicular to the direction of the road over low or swampy areas, were an improvement over impassable mud or dirt roads, but were rough in the best of conditions and a hazard to horses due to the shifting of loose logs. Such log roads were built in Glastonbury in 3300 BC, and brick-paved roads were built in the Indus Valley on the Indian subcontinent from around the same time. 
Then improvements in metallurgy meant that by 2000 BC, stone-cutting tools were generally available in the Middle East and Greece, allowing local streets to be paved. In 500 BC, Darius the Great started an extensive road system for Persia, including the famous Royal Road, which was one of the finest highways of its time and was used even after Roman times. Because of the road’s superior quality, mail couriers could travel almost 1,700 miles in seven days.

A map of Roman roads in 125 CE.

With the advent of the Roman Empire, there was a need for armies to be able to travel quickly from one area to another, and the existing roads were often muddy, which greatly delayed the movement of large masses of troops. To solve this problem, the Romans built great roads which used deep roadbeds of crushed stone as an underlying layer to ensure that they kept dry, as the water would flow out from the crushed stone instead of becoming mud in clay soils. The legions made good time on these roads and some are still used now. On the more heavily travelled routes, there were additional layers that included six-sided capstones, or pavers, which reduced the dust and the drag from wheels. These pavers allowed the Roman chariots to travel very quickly, ensuring good communication with the Roman provinces. Farm roads were often paved first towards town, to keep produce clean. Early forms of springs and shock absorbers to reduce the bumps were incorporated in horse-drawn transport, as the original pavers were sometimes not perfectly aligned. But Roman roads deteriorated in medieval Europe because of a lack of resources and skills to maintain them; their alignments are still partially used today though, for example on sections of our A1 road. The earliest specifically engineered roads were built during the British Iron Age, and the road network was expanded during the Roman occupation. New roads were added in the Middle Ages and from the 17th century onwards, and as life slowly developed and became richer, especially with the Renaissance, new roads and bridges began to be built, often based on Roman designs. More and more roads were built, but responsibility for the state of the roads had lain with the local parish since Tudor times. Then in 1656 the parish of Radwell, Hertfordshire petitioned Parliament for help in order to maintain its section of the Great North Road. 
Parliament passed an act which gave the local justices powers to erect toll-gates on a section of the Great North Road, between Wadesmill in Hertfordshire, Caxton in Cambridgeshire and Stilton in Huntingdonshire, for a period of eleven years, and the revenues so raised were to be used for the maintenance of the Great North Road in their jurisdictions. The toll-gate erected at Wadesmill became the first effective toll-gate in England. Then came the Turnpike Act of 1707, beginning with a section of the London to Chester road between Fornhill and Stony Stratford. The idea was that the trustees would manage resources from the several parishes through which the highway passed, augment this with tolls from users from outside the parishes, and apply the whole to the maintenance of the main highway. This became the pattern for a growing number of highways to have tolls placed on them, sought by those who wished to improve the flow of commerce through their part of a county. At the beginning of the 18th century, sections of the main radial roads into London were put under the control of individual turnpike trusts. The pace at which new turnpikes were created picked up in the 1750s, as trusts were formed to maintain the cross-routes between the Great Roads radiating from London. Roads leading into some provincial towns, particularly in Western England, were put under single trusts, and key roads in Wales were then turnpiked. In South Wales, the roads of complete counties were put under single turnpike trusts in the 1760s. Turnpike trusts grew, such that by 1825 about 1,000 trusts controlled 18,000 miles of road in England and Wales. Interestingly, from the 1750s these Acts required trusts to erect milestones indicating the distance between the main towns on the road. Users of the road were obliged to follow what were to become rules of the road, such as driving on the left and not damaging the road surface. 
Trusts could also take additional tolls during the summer to pay for watering the road in order to lay the dust thrown up by fast-moving vehicles. Parliament then passed a few general acts dealing with the administration of the trusts, along with restrictions on the width of wheels, as narrow wheels were said to cause a disproportionate amount of damage to the road. Construction of roads improved slowly, initially through the efforts of individual surveyors such as John Metcalf in Yorkshire in the 1760s. British turnpike builders began to realise the importance of selecting clean stones for surfacing, and of excluding vegetable material and clay, to make better-lasting roads. Later, after the ending of the turnpike trusts, roads were funded from taxation, and so gradually a proper network of roadways was developed in Britain to supplement the use of rivers as a system of transportation. Many of these roadways were developed as a result of the trading of goods and services such as wool, sheep, cattle and salt, as they helped link together market towns as well as harbours and ports. Other roadways were developed to meet the needs of pilgrims visiting shrines such as Walsingham, even for transporting corpses from isolated places to local graveyards. Medieval England also had the “Four Highways”: Henry of Huntingdon wrote that Ermine Street, the Fosse Way, Watling Street and the Icknield Way were constructed by royal authority. Two new vehicle duties, the ‘Locomotive duty’ and the ‘Trade Cart duty’, were introduced in the 1888 budget, and since 1910 the proceeds of road vehicle excise duties have been dedicated to funding the building and maintenance of the road system. 
From 1920 to 1937, most roads in the United Kingdom were funded from this Road Fund using taxes raised from fuel duty and Vehicle Excise Duty, but since 1937 roads have been funded from general taxation, with all motoring duties, including VAT, being paid directly to the Treasury. Tolls or congestion charges are still used for some major bridges and tunnels; for example, the Dartford Crossing has a congestion charge. The M6 Toll road, originally the Birmingham Northern Relief Road, is designed to relieve the M6 through Birmingham, as the latter is one of the most heavily used roads in the country. There were two public toll roads, Roydon Road in Stanstead Abbots, Hertfordshire and College Road in Dulwich, London, and about five private toll roads. Since 2006, congestion charging has been in operation in London and Durham. Before 14 December 2018, the M4’s Second Severn Crossing, officially ‘The Prince of Wales Bridge’, included tolls, but after being closed for three days for toll removal the bridge opened again on 17 December 2018, starting with a formal ceremony, and toll payment was scrapped. It made its mark in history as it is believed to be the first time in 400 years that the crossing was free!

After the election of the Labour government in 1997, most existing road schemes were cancelled and problem areas of the road network were then subjected to a range of studies to investigate non-road alternatives. In 1998 it was proposed to transfer parts of the English trunk road network to local councils, retaining central control for the network connecting major population centres, ports, airports, key cross-border links and the Trans-European Road Network. Since then, various governments have continued to implement new schemes to build new roads and widen existing ones as well as review other transport infrastructures because between 1980 and 2005 traffic increased by 80%, whilst road capacity increased by just 10%. Naturally, concern has been raised, especially in terms of damage to the countryside. Also, on 4 June 2018, a change in the law meant that learner drivers, who had previously been banned from driving on motorways, were allowed to use them when accompanied by a driving instructor in a car with dual controls. Because motorway driving is not offered as part of the practical driving test in the United Kingdom, these measures were put in place in an effort to teach motorway safety.

As so often happens when researching a subject such as this, the more one finds it seems the more there is to be found! Suffice to say that in addition to the above there is so much more that can be said about transportation by road, but I think this is enough for now!

This week…
Did you hear about the man who began a career by writing dirty jokes, but then went on to create proper poetry? He went from bawd to verse…

Click: Return to top of page or Index page


Whilst browsing through websites for some information, I found the following question: “Will aliens have similar mathematics and natural science in this universe if they exist?” One answer given was: “Not just similar, but identical. Their bit of universe will likely be exactly like ours, and if they explore it, they will arrive at the same description of it as we do. If they are able to come here, their physics is likely to be more advanced than ours, but everything we have in common will be identical. They may or may not use a different base in mathematics than we do. We commonly use base 10, but the principles of mathematics are actually independent of the base, and we can as easily do the same mathematics with base 2 (computers already do that), 16 (people often use that as a compromise when talking to computers) or 20 (the ancient Mayans, for instance). It’s all down to what axioms they decide to use, and if they want their mathematics to be useful for describing physics, they have to use the same axioms as well.” There are also a few folk here on Earth with crazy ideas, and one person asked if there is a risk that hostile aliens could find the location of Earth and invade. Of course, the question really is how big the risk is, because it will never be demonstrably zero. It is impossible to prove something doesn’t exist, even when it’s as intangible as a risk. Therefore, it only makes sense to look at the factors that decrease the risk, at what makes it unlikely that hostile aliens could find the location of Earth and invade. The following is a reply given by a scientist. “First, consider interstellar separation. Our current knowledge of physics implies that nothing can travel faster than light, and anything which does approach that speed suffers massively from the effects of time dilation. 
So either the aliens will take tens of thousands of years to travel from star to star, or time dilation makes it a one-way trip, because if they return to their home planet, it will be tens of thousands of years older than when they left. The distance they must cover is mind-bogglingly huge, and the trip is expensive, dangerous and long – unless they’ve cracked the light-speed barrier, which is very unlikely. Next, the likelihood of Earth being a useful target is low, because there are few planets that are even similar to, let alone the same as, another. The aliens would have to find the Earth to be the most viable source of something valuable to them, even though their planet is vastly different from Earth. Their needs, through evolution, will match what exists on their own planet rather than here. Volatiles (hydrogen, methane, etc.) are easier to gather from gas giants and moons; metals are easier to mine from asteroids and comets. Of course, even if there are aliens that find our planet useful, they could be on the other side of the galaxy rather than anywhere near Sol. It’s likely they’ll never find us in the galactic forest, or through all the clutter of gas, other systems and so on. But for what purpose would they be hostile? There is as much if not more chance that they would be indifferent, or helpful. Why travel across interstellar distances just to pick a fight? Following on from that, with great intelligence comes great insight, inquisitiveness and the effort towards scientific advancement. These things tend to replace, or at least greatly diminish, the initial basic instincts of fear, suspicion and violent tendencies. Finally, these aliens need to exist in the same time period we do. The universe and our galaxy have great age – a long past covering billions of years, and an equally long future. We have existed for a mere eye-blink of time. 
The aliens probably wouldn’t arrive until long after we leave, if we ever learn the secret of getting around from star to star like they do. Either that, or they arrived before we existed and moved on. With all those factors counting against invasion, it seems there’s a very low risk.” I also recall an episode of ‘Star Trek – The Next Generation’ which involved languages. In it, Deanna Troi, the ship’s counsellor, picked up what to you and me would be a drinking cup. But she pointed out to the captain, Jean-Luc Picard, that were he to show this item to someone from a different galaxy, they might perceive the cup in quite a different manner. For example, they might see it as a treasured item, to be revered, something originally owned or used by a great ruler. Or it might be symbolic, an item shown one to another to demonstrate overcoming an enemy and in that way creating a friendship between nations. Then again, it might be an item for two leaders to drink from, thus sharing an agreement. Different countries on Earth use language, a structured system of communication used by us humans. Language can be based on speech and gesture; it can be spoken, signed or written. The structure of language is its grammar and its components are its vocabulary. Many of our languages, including the most widely-spoken ones, have writing systems that enable sounds or signs to be recorded for future use. Our language is unique among the known systems of animal communication in that it is not dependent on a single mode of transmission (sight, sound, etc.), it is highly variable between cultures and across time, and it affords a much wider range of expression than other systems. Human languages have the properties of productivity and displacement, and they also rely on social convention and learning. 
Estimates of the number of human languages in the world vary between 5,000 and 7,000, though precise estimates depend on an arbitrary distinction being established between languages and dialects. Natural languages are spoken, signed or both. However, any language can be encoded into secondary media using auditory, visual, or tactile stimuli, for example writing, whistling, signing, signalling or braille.

The English word ‘language’ derives ultimately from a Proto-Indo-European tongue through Latin and Old French. The word is sometimes used to refer to codes, ciphers and other kinds of artificially-constructed communication systems, such as those used for computer programming. Over the years there have been attempts to define what language is, and one definition sees language primarily as the mental faculty that allows humans to undertake linguistic behaviour, to learn languages and to produce and understand utterances. This definition stresses the universality of language to all humans, and it emphasises the biological basis for the human capacity for language as a unique development of the human brain. Another definition sees language as a formal system of signs governed by grammatical rules of combination to communicate meaning. This definition stresses that human languages can be described as closed, structural systems consisting of rules that relate particular signs directly to particular meanings.

A conversation in American Sign Language.

Throughout history, humans have speculated about the origins of language, but interestingly, theories about the origin of language differ in regard to their basic assumptions about what language actually is. Some theories are based on the idea that language is so complex that one cannot imagine it simply appearing from nothing in its final form, but that it must have evolved from earlier pre-linguistic systems among our pre-human ancestors. The opposite viewpoint is that language is such a unique human trait that it cannot be compared to anything found among non-humans, and that it must therefore have appeared suddenly in the transition from pre-hominids to early man. Because language emerged in the early prehistory of man, before the existence of any written records, its early development has left no historical traces, and it is believed that no comparable processes can be observed today. Theories which stress continuity of language often look at animals to see if, for example, primates display any traits that can be seen as analogous to what pre-human language must have been like. To this end, early human fossils have been inspected for traces of physical adaptation to language use or pre-linguistic forms of symbolic behaviour. Among the signs in human fossils that may suggest linguistic abilities are the size of the brain relative to body mass, the presence of a larynx capable of advanced sound production, and the nature of tools and other manufactured artefacts. The formal study of language is often considered to have started in India with Pānini, a 5th-century BC scholar of grammar who formulated 3,959 rules of Sanskrit. However, Sumerian scribes were already studying the differences between Sumerian and Akkadian grammar around 1900 BC. Subsequent grammatical traditions developed in all of the ancient cultures that adopted writing. 
In the 17th century AD, the French developed the idea that the grammars of all languages were a reflection of the universal basics of thought, and therefore that grammar was universal. Spoken language relies on the human physical ability to produce sound, a longitudinal wave propagated through the air at a frequency capable of vibrating the ear drum. This ability depends on the physiology of the human speech organs. These organs consist of the lungs, the voice box (larynx) and the upper vocal tract – the throat, the mouth, and the nose. By controlling the different parts of the speech apparatus, the airstream can be manipulated to produce different speech sounds. Some of these speech sounds, both vowels and consonants, involve release of air flow through the nasal cavity. Other sounds are defined by the way the tongue moves within the mouth such as the l-sounds, called laterals as the air flows along both sides of the tongue, and the r-sounds. By using these speech organs, humans can produce hundreds of distinct sounds. Some appear very often in the world’s languages, whilst others are more common in particular language families, areas, or even specific to a single language.

An ancient Tamil inscription at Thanjavur.

Languages express meaning by relating a sign form to a meaning, or its content. Sign forms must be something that can be perceived, for example in sounds, images or gestures, and then related to a specific meaning by social convention. Because the basic relation of meaning for most linguistic signs is based on social convention, linguistic signs can be considered arbitrary, in the sense that the convention is established socially and historically, rather than by means of a natural relation between a specific sign form and its meaning. As a result, languages must have a vocabulary of signs related to specific meanings. The English sign “dog” denotes, for example, a member of the species Canis familiaris. Depending on its type, language structure can be based on systems of sounds (speech), gestures (sign languages), or graphic or tactile symbols (writing). All spoken languages use segments such as consonants or vowels, and many use sound in other ways to convey meaning, like stress, pitch and duration of tone, whilst writing systems represent language using visual symbols, which may or may not correspond directly to the sounds of spoken language. Because all languages have a very large number of words, no purely logographic scripts are known to exist, although the best-known examples of largely logographic writing systems are those used to write Chinese and Japanese. Written language represents the way spoken sounds and words follow one after another by arranging symbols (letters, numbers, etc.) according to a pattern that follows a certain direction. The direction used in a writing system is entirely arbitrary and established by convention. Some writing systems use the horizontal axis (left to right as the Latin script, or right to left as the Arabic script), whilst others such as traditional Chinese writing use the vertical dimension (from top to bottom). 
A few writing systems use opposite directions for alternating lines, and others, such as the ancient Maya script, can be written in either direction and rely on graphic cues to show the reader the direction of reading. In order to represent the sounds of the world’s languages in writing, linguists have developed the International Phonetic Alphabet which is designed to represent all of the discrete sounds that are known to contribute to meaning in human languages.

The Basic Structure of an English Sentence.

It is not realistically possible in this blog post for me to go into such things as grammar, parts of speech, word classes and syntax, especially as languages differ so widely in how much they rely on processes of word formation. For example, the English sentence “The cat sat on the mat” can be analysed in terms of grammatical functions: “The cat” is the subject, “on the mat” is a locative phrase, and “sat” is the core of the predicate. Another way in which languages convey meaning is through the order of words within a sentence. The grammatical rules, or syntax, determine why a sentence in English such as “I love you” is meaningful, but “love you I” is not. Syntactic rules determine how word order and sentence structure are constrained, and how those constraints contribute to meaning. For example, in English, the two sentences “the slaves were cursing the master” and “the master was cursing the slaves” mean different things, because the role of the grammatical subject is encoded by the noun being in front of the verb, and the role of object is encoded by the noun appearing after the verb. What can make other languages difficult to learn is that these rules may differ from one language to another! I will not go into detail over these things or aspects like the ‘accusative case’ and the ‘nominative case’, which are far beyond me! Suffice it to say it has been found that whilst we have the ability to learn any language, we only do so if we grow up in an environment in which that language exists and is used by others. Language is therefore dependent on communities of speakers, most usually where children learn language from their elders and peers and then themselves transmit language to their own children.
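The subject–verb–object ordering rule described above can even be sketched as a toy program. The tiny word list and the checker below are invented purely for illustration and are nothing like a real grammar, but they show how a fixed word order lets a simple rule separate a meaningful sentence from a jumbled one:

```python
# A toy grammar: English clauses follow subject-verb-object (SVO) order.
# The tiny lexicon below is invented purely for this illustration.
LEXICON = {
    "I": "subject",
    "love": "verb",
    "you": "object",
}

def is_svo(sentence):
    """Return True if the words appear in subject, verb, object order."""
    roles = [LEXICON.get(word) for word in sentence.split()]
    return roles == ["subject", "verb", "object"]

print(is_svo("I love you"))   # True: the words follow SVO order
print(is_svo("love you I"))   # False: verb-object-subject is not English order
```

The same three words pass or fail purely on their order, which is exactly the point made above about how word order encodes grammatical roles.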

Owing to the way in which language is transmitted between generations and within communities, language perpetually changes, diversifying into new languages or converging due to contact with others. The process is similar to the process of evolution, but languages differ from biological organisms in that they readily incorporate elements from other languages through the process of diffusion, as speakers of different languages come into contact. Humans also frequently speak more than one language, often acquiring their first language or languages as children, then learning new languages as they grow up. Because of the increased language contact in our globalising world, many small languages are becoming endangered as their speakers shift to other languages that then afford the possibility to participate in larger and more influential speech communities. For a while it was feared that the Welsh language was dying out, but happily more and more people are speaking it, as well as it being taught. Some years ago I learned a few words of Welsh and was amazed to find how similar some words in that language were to those in other languages, for example French. I have also had a look at Old English, but as my research showed, despite Old English being the direct ancestor of modern English, it is almost unintelligible to contemporary English speakers.

The first page of the poem Beowulf, written in Old English in the early medieval period (800–1100 AD).

To finish this week, I have included a point which actually relates to the main text above, but which I feel is quite humorous and it is this.
Many languages have grammatical conventions that signal the social position of the speaker in relation to others, like saying “your honour” when addressing a judge. But in one Australian language, a married man must use a special set of words to refer to everyday items when speaking in the presence of his mother-in-law…

Click: Return to top of page or Index page

Change Is All Around Us

Every second of the day, things change. Some lives begin, some lives end, new ideas surface whilst other things fall out of use. A little while ago I saw a tv item about a person who was considering ending his life by jumping off a motorway bridge. Thankfully they were persuaded not to, it was clearly a call for help. But in another instance one person sadly did end their life and the police had to close the road for several hours until absolutely all evidence of the tragedy had been cleared away. It was sad to learn of this, but also sad to learn that some people were found simply sitting in their cars until the motorway could be re-opened and were angry and frustrated by the delays this event had caused. It seems that some people get upset and annoyed about things that they cannot control, just as when the day dawns and the rain falls. Surely we should do our very best to cope with that change. In the latter instance there were queues of traffic on the motorway, perhaps cars getting low on fuel along with children getting fractious, lorry drivers having to park up because of the hours they were allowed to drive, people missing holiday flights or perhaps even cruises. Things happen that we can deal with, whilst at other times we cannot. As a family we all enjoyed going on holiday and getting to North Devon was considered part of that holiday. We saw places we would otherwise not have known about, I was taught map-reading and learned a good sense of direction. When traffic jams occurred we looked around at other vehicles, learning makes and models, identifying registration plates to see where they were registered and how old they were. It all helped to pass the time. As time went on and I got older, I did my level best to try to minimise the stresses and strains of whatever I was doing, at least as best I could. I would plan ahead, managing the things that I was able to and not getting wound up over things that I could not reasonably control. 
So far as holidays were concerned, I would pack my bags the night before. If I was going abroad, perhaps flying from London, I would go down to a nearby hotel the night before. On the flight, where I could I noted where we were, although I will admit that on journeys back from the U.S.A. I tried to get a flight that left at a time such that I was back at Heathrow in the morning! It meant that I was able to sleep for much of the flight and was refreshed when we landed. I tried to make the best of my circumstances. Likewise on my lovely cruise holiday I went down to Southampton the day before the cruise began, so I would be there in good time and not be delayed or unduly stressed. The weather during much of the cruise was very good, so it wasn’t often that the sea was rough. I became used to that, in fact the gentle rocking movement was quite relaxing. At least I considered it that way, sadly a few of the other passengers weren’t quite so comfortable. But they were the ones who also wanted air-conditioned coaches on our bus tours and not all places had those. Some folk became quite agitated, angry even. Over the years I have seen how both stress and worry affects different people in vastly different ways. Some would always see the negative side to a situation, others a very positive one and a few had a balanced view of things. Something I was taught many years ago and really liked was a prayer that I later found out was known as the Serenity Prayer, written by the American theologian Reinhold Niebuhr (1892–1971). It is commonly quoted as follows:

Serenity Prayer.

It seems to me, especially since being in the Care Home I am presently in, how we can so easily lose sight of what one might consider to be the ‘bigger picture’ and concentrate too much on the minor things that are important but not quite as vital. I recall a very good film where some people found themselves stuck in a lift which had stopped between floors. One man decided that he was going to ask his girlfriend to marry him, whilst others had similar positive thoughts, so all but one of them waited patiently for help to come to get them freed. The exception was one dear lady who wanted immediate action. She may not have liked that she was not in control of what was happening, but others calmed her down. She finally sat down and frantically searched right through her handbag, calling out “Where are my Tic-Tacs???”. She could not grasp why everyone else was looking at her… At various points in our lives I am sure that all of us will have various difficulties to overcome. It may be within ourselves, with a relative or a friend. It is never easy at such times to simply stop, take a deep breath, then consider what options we have. In this Care Home there are some inmates who have dementia and are unable to think rationally or logically. One inmate, sadly no longer alive now, would go around the place ‘tidying up’, moving things around. Except they moved such things as ‘wet floor’ notices, which meant other inmates could wander around and slip on a wet floor. Covid-19 has been a real problem, as many of the inmates get into a routine, which ordinarily is good, but when they need to be isolated for a while rather than mix with others in the dining room or tv lounge, they have difficulty in understanding why. I have learned that dementia does make some folk behave like young children. 
Equally, some want certain things laid out in a particular way, like pot-plants, but due to the inmates’ age the plants are sometimes knocked over and so the soil goes everywhere. It is also for that reason that most inmates have meals together in the dining rooms, as it is easier for Carers to tend to them. Some inmates need bibs, others are coaxed into eating, though I know in my case I have had to be careful how much I eat because the food is good and I am sometimes given too much!

I have said before about following rules and regulations, in particular how important it is that we follow them. This was especially true in the early days of train transport and of other motor vehicles, when certain rules and regulations had to be put in place. Over the centuries of the human race we have had rules and regulations, and a major one is quite well-known, this being the Code of Hammurabi. It is a Babylonian legal text which was composed c. 1755–1750 BC. It is the longest, best-organised and best-preserved legal text from the ancient Near East; it is written in the Old Babylonian dialect of Akkadian and is purported to have been written by Hammurabi, sixth king of the First Dynasty of Babylon. The primary copy of the text is inscribed on a basalt or diorite ‘stele’ (plural stelae), some 7ft 4 1⁄2in (2.25m) tall. A stele (pronounced ‘stee-lee’), or occasionally ‘stela’ when derived from Latin, is a stone or wooden slab, generally taller than it is wide, erected in the ancient world as a monument. The surface of the stele often has text, ornamentation, or both, and these may be inscribed, carved in relief, or painted. Stelae were created for many reasons. Grave stelae were used for funerary or commemorative purposes. Stelae as slabs of stone would also be used as Ancient Greek and Roman government notices, or to mark border or property lines. They were also occasionally erected as memorials to battles. For example, along with other memorials, there are more than half a dozen stelae erected on the battlefield of Waterloo at the locations of notable actions by participants in the battle. Traditional Western gravestones may technically be considered the modern equivalent of ancient stelae, though the term is very rarely applied in this way. 
Equally, stele-like forms in non-Western cultures may be called by other terms, and the words ‘stele’ and ‘stelae’ are most consistently applied in archaeological contexts to objects from Europe, the ancient Near East and Egypt, China, as well as Pre-Columbian America. The stele showing the Code of Hammurabi was discovered in 1901 at the site of Susa in present-day Iran, where it had been taken as plunder six hundred years after its creation. The text itself was copied and studied by Mesopotamian scribes for over a millennium. The stele now resides in the Louvre Museum. The top of the stele features an image in relief of Hammurabi with Shamash, the Babylonian sun-god and god of justice. Below the relief are about 4,130 lines of cuneiform text; one-fifth contains a prologue and epilogue in poetic style, whilst the remaining four-fifths contain what are generally called the laws. In the prologue, Hammurabi claims to have been granted his rule by the gods “to prevent the strong from oppressing the weak”. The laws are in a ‘casuistic’ form, expressed as logical ‘if…then’ conditional sentences. Their scope is broad, including criminal, family, property and commercial law. Modern scholars have responded to the Code with admiration at its perceived fairness and respect for the rule of law, and at the complexity of Old Babylonian society. There has also been much discussion of its influence on Mosaic law, primarily referring to the Torah, or the first five books of the Hebrew bible. Despite some uncertainty surrounding these issues, Hammurabi is regarded outside Assyriology as an important figure in the history of law, and the document as a true legal code. The U.S. Capitol has a relief portrait of Hammurabi alongside those of other lawgivers, and there are replicas of the stele in numerous institutions, including the United Nations headquarters in New York City and the Pergamon Museum in Berlin.
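That casuistic ‘if…then’ shape maps rather neatly onto the conditional logic any modern programmer would recognise. As a playful sketch only – the paraphrased rules below are invented for illustration and are not translations of the actual laws:

```python
# Casuistic law: each rule is an "if <case> then <ruling>" pair.
# The paraphrased rules here are invented for illustration only.
laws = [
    (lambda case: case == "theft of livestock", "restitution is owed"),
    (lambda case: case == "false accusation", "the accuser is punished"),
]

def judge(case):
    """Return the ruling of the first law whose condition matches the case."""
    for condition, ruling in laws:
        if condition(case):
            return ruling
    return "no applicable law"

print(judge("false accusation"))  # the accuser is punished
print(judge("jaywalking"))        # no applicable law
```

Each of Hammurabi’s laws stands alone as a self-contained condition and consequence, which is exactly why the collection reads like a list of cases rather than a statement of general principles.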

Babylonian territory before (red) and after (orange) Hammurabi’s reign.

Hammurabi ruled from 1792 to 1750 BC and he secured Babylonian dominance over the Mesopotamian plain through military prowess, diplomacy, and treachery. When he inherited his father’s throne, Babylon held little local control. The local leader was Rim-Sin of Larsa. Hammurabi waited until Rim-Sin grew old, then conquered his territory in one swift campaign, leaving his organisation intact. Later, Hammurabi betrayed allies in nearby territories in order to gain their control. Hammurabi had an aggressive foreign policy, but his letters suggest he was concerned with the welfare of his many subjects and was interested in law and justice. He commissioned extensive construction works, and in his letters he frequently presented himself as his ‘people’s shepherd’. Justice was also a theme of the prologue to his Code. Although Hammurabi’s Code was the first Mesopotamian law collection discovered, it was not the first written. Several earlier collections survive. These collections were written in Sumerian and Akkadian, and they also purport to have been written by rulers. There were almost certainly more such collections, as statements by other rulers suggest the custom was widespread, and the similarities between these law collections make it tempting to assume a consistent underlying legal system. There are additionally thousands of documents from the practice of law, from before and during the Old Babylonian period. These documents include contracts, judicial rulings, letters on legal cases, as well as reform documents. Mesopotamia has the most comprehensive surviving legal corpus from before the Digest of Justinian, even compared to those from Rome and ancient Greece.

The Royal City (left) and Acropolis (right) of Susa in 2007.

The whole Code of Hammurabi is far too long to detail in this blog post. Just the prologue and epilogue together occupy one-fifth of the text! Out of around 4,130 lines, the prologue occupies 300 lines and the epilogue occupies 500. The 300-line prologue begins with an etiology, an account of the origin of Hammurabi’s royal authority, and in it Hammurabi lists his achievements and virtues. Unlike the prologue, the 500-line epilogue is explicitly related to the laws and begins with the words “these are the just decisions which Hammurabi has established”. He exalts his laws and his magnanimity, and he then expresses a hope that “any wronged man who has a lawsuit may have the laws of the stele read aloud to him and know his rights”. Hammurabi wished for good fortune for any ruler who heeded his pronouncements and respected his stele; however, at the end of the text he invoked the wrath of the gods on any man who disobeyed or erased his pronouncements. The epilogue contains much legal imagery, and the phrase “to prevent the strong from oppressing the weak” is reused from the prologue. However, the king’s main concern appears to be ensuring that his achievements are not forgotten and his name not sullied. The list of curses heaped upon any future defacer is 281 lines long and extremely forceful, and some of the curses are very vivid, for example “may the god Sin decree for him a life that is no better than death”; “may he (the future defacer) conclude every day, month, and year of his reign with groaning and mourning” and “may he experience the spilling of his life force like water”. Hammurabi implored a variety of gods individually to turn their particular attributes against the defacer. For example: “may the Storm God deprive him of the benefits of rain from heaven and flood from the springs” and “may the God of Wisdom deprive him of all understanding and wisdom and lead him into confusion”. 
Time passed, and the essential structure of international law was mapped out during the European Renaissance period, though its origins lay deep in history and can be traced to cooperative agreements between peoples in the ancient Middle East. Many of the concepts that today underpin the international legal order were established during the Roman Empire; the ‘Law of Nations’, for example, was invented by the Romans to govern the status of foreigners and the relations between foreigners and Roman citizens. In accord with the Greek concept of natural law, which they adopted, the Romans conceived of the law of nations as having universal application. In the Middle Ages, the concept of natural law, along with religious principles through the writings of Jewish philosophers and theologians, became the intellectual foundation of the new discipline of the law of nations, regarded as that part of natural law that applied to the relations between sovereign states. After the collapse of the western Roman Empire in the 5th century, Europe suffered from frequent warring for nearly 500 years. Eventually, a group of nation states emerged, and a number of sets of rules were developed to govern international relations. In the 15th century the arrival of Greek scholars in Europe from the collapsing Byzantine Empire and the introduction of the printing press spurred the development of scientific, humanistic, and individualist thought, whilst the expansion of ocean navigation by European explorers spread European norms throughout the world and broadened the intellectual and geographic horizons of western Europe. The subsequent consolidation of European states with increasing wealth and ambitions, coupled with the growth in trade, necessitated the establishment of a set of rules to regulate their relations. 
In the 16th century the concept of sovereignty provided a basis for the entrenchment of power in the person of the king, and was later transformed into a principle of collective sovereignty as the divine right of kings gave way constitutionally to parliamentary or representative forms of government. Sovereignty also acquired an external meaning, referring to independence within a system of competing nation-states. Scholars produced new writings focussing greater attention on the law of peace and the conduct of international relations than on the law of war, as the focus shifted away from the conditions necessary to justify the resort to force, in order to deal with increasingly sophisticated relations in areas such as the law of the sea and commercial treaties. Various philosophies grew, bringing with them the acceptance of the concept of natural rights, which played a prominent role in the American and French revolutions and which was becoming a vital element in international politics. In international law, however, the concept of natural rights had only marginal significance until the 20th century. It was the two World Wars of the 20th century that brought about the real growth of international organisations, for example the League of Nations, founded in 1919, and the United Nations, founded in 1945. This led to the increasing importance of human rights. Having become geographically international through the colonial expansion of the European powers, international law became truly international in the first decades after World War II, when decolonisation resulted in the establishment of scores of newly independent states. 
The collapse of the Soviet Union and the end of the Cold War in the early 1990s increased political cooperation between the United States and Russia and their allies across the Northern Hemisphere, but tensions also increased between states of the north and those of the south, especially on issues such as trade, human rights, and the law of the sea. Technology and globalisation, the rapidly escalating growth in the international movement in goods, services, currency, information, and persons, also became significant forces, spurring international cooperation and tending to reduce the ideological barriers that divided the world. However, there are still trade tensions between various countries at various times, for what seem to be at times inexplicable reasons. As I have said before, the one constant in this Universe is that things change!

This week, a familiar phrase…
The phrase “turn a blind eye”, often used to refer to a wilful refusal to acknowledge a particular reality, dates back to a legendary chapter in the career of the British naval hero Horatio Nelson. During 1801’s Battle of Copenhagen, Nelson’s ships were pitted against a large Danish-Norwegian fleet. When his more conservative superior officer flagged for him to withdraw, the one-eyed Nelson supposedly brought his telescope to his bad eye and blithely proclaimed, “I really do not see the signal.” He went on to score a decisive victory. Some historians have since dismissed Nelson’s famous quip as merely a battlefield myth, but the phrase “turn a blind eye” persists to this day.

Click: Return to top of page or Index page

The History Of Rail Transport

On 21 February 1804, the world’s first steam-powered railway journey took place when Trevithick’s unnamed steam locomotive hauled a train along the tramway of the Penydarren ironworks, near Merthyr Tydfil in South Wales. But in fact, the history of rail transport began in prehistoric times. It can be divided into several discrete periods as defined by the principal means of track material and motive power used. The Post Track, a prehistoric causeway in the valley of the River Brue in the Somerset Levels, is one of the oldest known constructed trackways and dates from around 3838 BC, making it some 30 years older than the Sweet Track from the same area. Various sections have actually been scheduled as ancient monuments. Evidence indicates that there was a 6 to 8.5km long paved trackway, the Diolkos, which transported boats across the Isthmus of Corinth in Greece from around 600 BC. Wheeled vehicles pulled by men and animals ran in grooves in limestone, which provided the track element, preventing the wagons from leaving the intended route. The Diolkos was in use for over 650 years, until at least the 1st century AD. Paved trackways were also later built in Roman Egypt. In China, a railway has been discovered in the south-west of Henan province near Nanyang city. It was carbon-dated to be about 2,200 years old, from the Qin dynasty. The rails were made from hard wood and treated against corrosion, whilst the sleepers, or railway ties, were made from wood that was not treated and have therefore rotted. Qin railway sleepers were designed to allow horses to gallop through to the next rail station, where they would be swapped for a fresh horse. The railway is theorised to have been used for transportation of goods to front-line troops and to repair the Great Wall.

The Reisszug, as it appears today.

The oldest operational railway is the Reisszug, a funicular railway at the Hohensalzburg Fortress in Austria, believed to date back to either 1495 or 1504 AD. Cardinal Matthäus Lang wrote a description of it back in 1515, detailing a cable-hauled line connecting points along a railway track laid on a steep slope. The system is characterised by two counterbalanced carriages that are permanently attached to opposite ends of a haulage cable, which is looped over a pulley at the upper end of the track. The result of such a configuration is that the two carriages move synchronously: as one ascends, the other descends at an equal speed. This feature distinguishes funiculars from inclined elevators, which have a single car that is hauled uphill. The line originally used wooden rails with a hemp haulage rope and was operated by human or animal power, through a treadwheel. The line still exists and remains operational, although in updated form.
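The counterbalancing is what made human or animal power sufficient: the drive only has to overcome the difference between the two carriages’ loads, not their full weight. A rough sketch of that idea, treating the track as vertical for simplicity, ignoring friction, and using invented masses:

```python
# Counterbalanced funicular: the descending carriage helps haul the
# ascending one, so the drive supplies only the *difference* in weight.
# Masses are invented for illustration; friction and track angle ignored.
g = 9.81  # gravitational acceleration in m/s^2

def net_haulage_force(ascending_kg, descending_kg):
    """Force in newtons the drive must supply to balance the two carriages."""
    return (ascending_kg - descending_kg) * g

# A 2000 kg carriage going up, balanced by an 1800 kg carriage going down:
# the drive only works against the 200 kg difference (roughly 1962 N),
# rather than the full 2000 kg load.
print(net_haulage_force(2000, 1800))
```

With equal loads the net force is zero, which is why a treadwheel worked by a man or a horse could move carriages far heavier than anything it could lift directly.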

A mining cart, shown in De Re Metallica (1556).

Wagonways, otherwise called tramways, using wooden rails and horse-drawn traffic, are known to have been used in the 1550s to facilitate transportation of ore tubs to and from mines. They soon became popular in Europe, and an example of their operation is shown in an illustration by Georgius Agricola. This line used ‘Hunde’ carts with un-flanged wheels running on wooden planks, with a vertical pin on the truck fitting into the gap between the planks to keep it going the right way. The miners called the wagons ‘Hunde’, or ‘dogs’, from the noise they made on the tracks. There are many references to wagonways in central Europe in the 16th century, and these were introduced to England by German miners, the first being at Caldbeck, Cumbria, quite possibly in the 1560s. A wagonway was built at Prescot near Liverpool some time around 1600, possibly even as early as 1594. Owned by Philip Layton, the line carried coal from a pit near Prescot Hall to a terminus about half a mile away. A funicular railway was made at Broseley in Shropshire some time before 1604, and this carried coal for James Clifford from his mines down to the River Severn, to be loaded onto barges and carried to riverside towns. The Wollaton Wagonway was completed in 1604 by Huntingdon Beaumont (c.1560–1624), an English coal mining entrepreneur who built two of the earliest wagonways in England for trans-shipment of coal. However, he was less successful as a businessman and died having been imprisoned for debt. The youngest of four sons, he was born to Sir Nicholas Beaumont and his wife Ann Saunders. They were an aristocratic family in the East Midlands and there were several branches to the Beaumont dynasty. This one was based at Coleorton, Leicestershire, approximately 2 miles (3.2km) east of Ashby de la Zouch. Beaumont was therefore of gentleman status in the formal Elizabethan sense; the family owned coal-bearing lands and worked them. 
He was involved in this coal working and eventually began working in his own right in the Nottingham area. During 1603 and 1604, in partnership with Sir Percival Willoughby, Lord of the Wollaton Manor, Beaumont constructed the wagonway which ran from Strelley, where Beaumont held mining leases, to Wollaton Lane. Beaumont was a successful coal prospector and an innovator in the development of mining techniques, and a key innovation attributed to him is the introduction of boring rods to assist in finding coal without sinking a shaft. His working life covered involvement in coal mining activities in Warwickshire, Leicestershire, Nottinghamshire and Northumberland. His coal mining and wagonway activities in the early 1600s near Blyth in Northumberland were, like most of his ventures, unprofitable, but the boring rod and wagonway technology he took with him was implemented by others to significant effect. The chain of wagonways he started in the English north east was later to influence George Stephenson, and in fact a major coal seam in the region was named the Beaumont Seam, commemorating his engineering efforts there. However, Beaumont lost considerable sums of money borrowed from friends and family, and he died in Nottingham Gaol in 1624 having been imprisoned for debt. The Middleton Railway in Leeds, which was built in 1758, later became the world’s oldest operational railway (other than funiculars), albeit now in an upgraded form, whilst in 1764 the first railway in America was built in Lewiston, New York.

The introduction of steam engines for blowing air into blast furnaces led to a large increase in British iron production after the mid 1750s. Coalbrookdale, a village in the Ironbridge Gorge in Shropshire, was a settlement of great significance in the history of iron ore smelting, as this is where iron ore was first smelted with coke by Abraham Darby (14 April 1677 – 5 May 1717). He was the first and best known of several men of that name and was born into an English Quaker family that played an important role in the Industrial Revolution. Darby developed a method of producing pig iron in a blast furnace fuelled by coking coal rather than charcoal and this was a major step forward in the production of iron as a raw material. This coal was drawn from drift mines in the sides of the valley and as it contained far fewer impurities than normal coal, the iron it produced was of a superior quality. Along with many other industrial developments that were going on in other parts of the country, this discovery was a major factor in the growing industrialisation of Britain. In the late 1760s, the Coalbrookdale Company began to fix plates of cast iron to the upper surface of wooden rails, which increased their durability and load-bearing ability. At first only ‘balloon loops’, or turning loops, could be used for turning wagons, but later movable points were introduced that allowed passing loops to be created. A system was introduced in which un-flanged wheels ran on L-shaped metal plates. It is said that a Sheffield colliery manager invented this flanged rail in 1787, though the exact date is disputed. The plate rail was taken up by Benjamin Outram for wagonways serving his canals, who manufactured them at his Butterley ironworks, and in 1803 William Jessop opened the Surrey Iron Railway. This was a double track plateway, sometimes erroneously cited as the world’s first public railway, in south London.
By 1789 Jessop had introduced a form of all-iron edge rail and flanged wheels for an extension to the Charnwood Forest Canal at Nanpantan, Leicestershire. Then in 1790, Jessop and his partner Outram began to manufacture edge-rails. The first public edgeway built was the Lake Lock Rail Road in 1796; although the primary purpose of the line was to carry coal, it also carried passengers. These two systems of constructing iron railways, the “L” plate-rail and the smooth edge-rail, continued to exist side by side into the early 19th century, but the flanged wheel and edge-rail eventually proved their superiority and became the standard for railways. Cast iron was not a satisfactory material for rails because it was brittle and broke under heavy loads; however, the wrought iron rail, invented by John Birkinshaw in 1820, solved these problems. Wrought iron, usually referred to simply as ‘iron’, was a ductile material that could undergo considerable deformation before breaking, making it more suitable for rails. But this iron was expensive to produce until Henry Cort patented the ‘puddling process’ in 1784. He had also patented the rolling process, which was fifteen times faster at consolidating and shaping iron than hammering. These processes greatly lowered the cost of producing iron and iron rails. The next important development in iron production was the ‘hot blast’ process, developed by James Neilson and patented in 1828, which considerably reduced the amount of coke or charcoal needed to produce pig iron. However, wrought iron was a soft material that contained slag, or ‘dross’, and this tended to make iron rails distort and delaminate, so they typically lasted less than 10 years in use, and sometimes as little as one year under high traffic. All these developments in the production of iron eventually led to the replacement of composite wood/iron rails with superior all-iron rails.
The introduction of the Bessemer process created the first inexpensive industrial-scale method for the mass production of steel from molten pig iron, before the development of the open hearth furnace. The key principle is the removal of impurities from the iron by oxidisation, with air being blown through the molten iron. The oxidation also raises the temperature of the iron mass and keeps it molten. This enabled steel to be made relatively inexpensively and led to the era of great railway expansion that began in the late 1860s. Steel rails lasted several times longer than iron and also made heavier locomotives possible, thus allowing for longer trains and improving the productivity of railways. The quality of steel had improved further by the end of the 19th century, reducing costs still more, and as a result steel completely replaced the use of iron in rails, becoming standard for all railways. In 1769 James Watt, a Scottish inventor and mechanical engineer, greatly improved the steam engine of Thomas Newcomen, which had been used to pump water out of mines. Watt developed a reciprocating engine capable of powering a wheel. Although the Watt engine powered cotton mills and a variety of machinery, it was a large stationary engine which could not be used otherwise, as the state of boiler technology necessitated the use of low-pressure steam acting upon a vacuum in the cylinder, and this required a separate condenser with an air pump. Nevertheless, as the construction of boilers improved, Watt investigated the use of high-pressure steam acting directly upon a piston. This raised the possibility of a smaller engine that might then be used to power a vehicle and in 1784 he patented a design for a steam locomotive. His employee, William Murdoch, produced a working model of a self-propelled steam carriage in that year.

A replica of Trevithick’s engine at the National Waterfront Museum, Swansea.

The first full-scale working railway steam locomotive was built in the United Kingdom in 1804 by Richard Trevithick, a British engineer born in Cornwall. This engine was driven by high-pressure steam, whilst the transmission system employed a large flywheel to even out the action of the piston rod. On 21 February 1804, the world’s first steam-powered railway journey took place when Trevithick’s unnamed steam locomotive hauled a train along the tramway of the Penydarren ironworks near Merthyr Tydfil, South Wales. Trevithick later demonstrated a locomotive operating upon a piece of circular rail track in Bloomsbury, London but he never got beyond the experimental stage with railway locomotives, not least because his engines were too heavy for the cast-iron plateway track which was then in use.

The ‘Locomotion’ at Darlington Railway Centre and Museum.

Inspired by earlier locomotives, in 1814 George Stephenson persuaded the manager of the Killingworth colliery where he worked to allow him to build a steam-powered machine. Stephenson played a pivotal role in the development and widespread adoption of the steam locomotive, as his designs considerably improved on the work of the earlier pioneers. In 1829 he built the locomotive ‘Rocket’, which entered and won the Rainhill Trials, and this success led to Stephenson establishing his company as the pre-eminent builder of steam locomotives for railways in Great Britain and Ireland, the United States, and much of Europe. Steam power continued to be the dominant power system in railways around the world for more than a century. Since then, manufacturers around the world have developed diesel and electric trains, each more powerful than the last. We have made high-speed trains and it really is amazing to see the differences which have occurred in such a relatively short space of time!

This week…
There is a well-known phrase “fine words butter no parsnips”. This proverbial phrase dates from the 17th century and expresses the notion that fine words count for nothing, whilst action means more than flattery or promises. These days we aren’t very likely to come across the phrase in modern street slang and it is more likely to be heard in a period costume drama. But the phrase comes from a time before potatoes were imported into Britain from America by John Hawkins in the mid 16th century and became a staple in what established itself as the national dish of meat and two veg. Before that, various root vegetables were eaten instead, often mashed and, as anyone who has eaten mashed swedes, turnips or parsnips can testify, they cry out to be ‘buttered-up’ – another term for flattery. It has even been said that we were known for our habit of layering on butter to all manner of foods, much to the disgust of the French, who used it as evidence of the English lack of expertise regarding cuisine!


Some 20th Century Changes

After my research for last week’s blog post on workhouses, I felt that there had to be a bit more to the story, so I continued looking and this led on to law in general. Now that is an absolutely huge subject that I cannot possibly hope to encompass in my blogs, but I can perhaps highlight a few things we have either forgotten or simply were never told about, as the law can be a fascinating insight into the priorities of a particular period in time and quite rightly it is constantly changing. For example, in 1313 MPs were banned from wearing armour or carrying weapons in Parliament, a law which still stands today. Others, such as the monarch’s guards, are still permitted to carry weapons, just not MPs themselves. It’s easy enough to see the chain of events that prompts laws to be written or changed. For example, since around 2013, when driving along a three-lane motorway, rule 264 of the Highway Code states that “You should always drive in the left-hand lane when the road ahead is clear. If you are overtaking a number of slow-moving vehicles, you should return to the left-hand lane as soon as you are safely past.” Middle-lane hogging is when vehicles remain in the middle lane longer than necessary, even when there aren’t any vehicles in the inside lane to overtake. So in a hundred years’ time, this law might be seen as a historical curiosity, although it was heartily welcomed by many drivers. So for rather obvious reasons, some laws are still standing whilst others have been dropped as they are inappropriate in the modern day. But go back to the late 19th century and there were the Locomotive Acts, or Red Flag Acts, which were a series of Acts of Parliament which regulated the use of mechanically propelled vehicles on British public highways. The first three, the Locomotive Act 1861, the Locomotive Act 1865 and the Highways and Locomotives (Amendment) Act 1878, contained restrictive measures on the manning and speed of operation of road vehicles.
They also formalised many important road concepts like vehicle registration, registration plates, speed limits, maximum vehicle weight over structures such as bridges, and the organisation of highway authorities. The most draconian restrictions and speed limits were imposed by the 1865 Act, also known as the ‘Red Flag’ Act, which required “all road locomotives, including automobiles, to travel at a maximum of 4mph (6.4km/h) in the country and 2mph (3.2km/h) in the city, as well as requiring a man carrying a red flag to walk in front of road vehicles hauling multiple wagons”. However, the 1896 Act removed some restrictions of the 1865 Act and also raised the speed limit to 14mph (23km/h). But first, let us go back to earlier times. For example, the First Act of Supremacy 1534. Over the course of the 1520s and 1530s, Henry VIII passed a series of laws that changed life in England entirely, and the most significant of these was this First Act of Supremacy, which declared that Henry VIII was the Supreme Head of the Church of England instead of the Pope, effectively severing the link between the Church of England and the Roman Catholic Church, and providing the cornerstone for the English Reformation. This change was so far-ranging that it is difficult to cover every effect that it had. It meant that England (and ultimately, Britain) would be a Protestant country rather than a Catholic one, with consequences for her allies and her sense of connection to the other countries of Europe. It gave Henry VIII additional licence to continue plundering and shutting down monasteries, which had been huge centres of power in England, with really significant consequences in that their role in alleviating poverty, and providing healthcare and education, was lost. It led to centuries of internal and external conflict between the Church of England and other faiths, some of which are still ongoing today. In fact, until 1707, there was no such thing as the United Kingdom.
There was England, and there was Scotland, two countries which had shared a monarch since 1603 but which were otherwise legally separate. But by 1707, the situation was becoming increasingly difficult, and union seemed to solve both sides’ fears that they were dangerously exposed to exploitation by the other. England and Scotland had been at each other’s throats since a time before the nations of ‘England’ and ‘Scotland’ even formally existed. The Acts of Union did not bring that to an end right away, but ultimately these ancient enemies became one of the most enduring political unions that have ever existed. That isn’t to say it has gone entirely uncontested: in 2014, a referendum was held on Scottish independence in which 55% of voters opted to remain in the union. We should also recall 1807, when the Slave Trade Act was introduced. In fact Britain had played a pivotal role in the international slave trade, though slavery had been illegal in Britain itself since 1102, but with the establishment of British colonies overseas, slaves were used as agricultural labour across the British empire. It was estimated that British ships carried more than three million slaves from Africa to the Americas, second only to the five million slaves which were transported by the Portuguese. The Quakers, or the Religious Society of Friends to give them their proper name, were a nonviolent, pacifist religious movement founded in the mid-17th century who were opposed to slavery from the start of their movement. They pioneered the Abolitionist movement, despite being a marginalised group in their own right. As non-Anglicans, they were not permitted to stand for Parliament. They founded a group to bring non-Quakers on board so as to have greater political influence, as well as working to raise public awareness of the horrors of the slave trade. This was achieved through the publication of books and pamphlets. The effect of the Slave Trade Act, once passed, was rapid.
The Royal Navy, which was the leading power at sea at the time, patrolled the coast of West Africa and between 1808 and 1860 freed 150,000 captured slaves. Then, in 1833, slavery was finally banned throughout the British Empire. In the first few decades of the Industrial Revolution, conditions in British factories were frequently abysmal. 15-hour working days were usual and this included weekends. Apprentices were not supposed to work for more than 12 hours a day, and factory owners were not supposed to employ children under the age of 9, but a parent’s word was considered sufficient to prove a child’s age and even these paltry rules were seldom enforced. Yet the wages that factories offered were still so much better than those available in agricultural labour that there was no shortage of workers willing to put up with these miserable conditions, at least until they had earned enough money to seek out an alternative. It was a similar social movement to the one that had brought an end to slavery that fought child labour in factories, and it was also believed that reducing working hours for children would lead to a knock-on effect where working hours for adults would also be reduced. The Factory Act of 1833, among a host of changes, banned children under 9 from working in textile mills, banned children under 18 from working at night, and ruled that children between 9 and 13 were not permitted to work unless they had a schoolmaster’s certificate showing they had received two hours’ education per day. So the Factory Act not only improved factory conditions, but also began to pave the way towards education for all. Then, just two years later, a law against cruelty to animals followed. Until 1835, there had been no laws in Britain to prevent cruelty to animals, except one in 1822, which exclusively concerned cattle. Animals were property, and could be treated in whatever way the property-owner wished.
It was actually back in 1824 that a group of reformers founded the Society for the Prevention of Cruelty to Animals, which we know today as the RSPCA. Several of those reformers had also been involved in the abolition of the slave trade, such as the MP William Wilberforce. Their initial focus was on working animals such as pit ponies, which worked in mines, but that soon expanded. The Cruelty to Animals Act of 1835, for which the charity’s members lobbied, outlawed bear-baiting and cockfighting, as well as paving the way for further legislation for things such as creating veterinary hospitals, and improving how animals were transported.

The RSPCA began by championing the rights of the humble pit pony.

Prior to the Married Women’s Property Act 1870, when a woman married a man, she ceased to exist as a separate legal being. All of her property prior to marriage, whether accumulated through wages, inheritance, gifts or anything else became his, and any property she came to possess during marriage was entirely under his control, not hers. There were a handful of exceptions, such as money held in trust, but this option was out of reach of all but the very wealthy. Given the difficulty of seeking a divorce at this time, this effectively meant that a man could do whatever he wished with his wife’s money, including leaving her destitute, and she would have very little legal recourse. But the Act changed this. It gave a woman the right to control money she earned whilst married, as well as keeping inherited property, and made both spouses liable to maintain their children from their separate property, something that was important in relation to custody rights on divorce. The Act was not retrospective, so women who had married and whose property had come into the ownership of their husbands were not given it back, which limited its immediate effect. But ultimately, it was a key stage on the long road to equality between men and women in Britain. It is clear that 1870 was a big year in British politics so far as education was concerned. This is because before then, the government had provided some funding for schools but this was piecemeal and there were plenty of areas where there were simply no school places to be found. This was complicated by the fact that many schools were run by religious denominations, as there was conflict over whether the government should fund schools run by particular religious groups. As has been seen, under the Factory Act 1833 there were some requirements that children should be educated, but these were frequently ignored.
Previously, industrialists had seen education as undesirable, at least when focusing on their ‘bottom line’, as hours when children were in education represented hours when they were not able to work. There were some factory jobs that only children could perform, for instance because of their size. But as automation advanced, it increasingly became the case that a lack of educated workers was holding back industrial production so industrialists became a driving force in pushing through comprehensive education. The Education Act of 1870 didn’t provide free education for all, but it did ensure that schools would be built and funded wherever they were needed, so that no child would miss out on an education simply because they didn’t live near a school. We take it for granted now, but free education for all was not achieved until 1944. At the end of the First World War, the Representation of the People Act 1918 is chiefly remembered as the act that gave women the right to vote, but in fact it went further than that. Only 60% of men in Britain had the right to vote prior to 1918, as voting rights were restricted to men who owned a certain amount of property. Elections had been postponed until the end of the First World War and now, in an atmosphere of revolution, Britain was facing millions of soldiers who had fought for their country returning home and being unable to vote. This was clearly unacceptable. As a result, the law was changed so that men aged over 21, or men who had turned 19 whilst fighting in the First World War, were given the vote. But it was also evident that women had contributed hugely to the war effort, and so they too were given the vote under restricted circumstances. The vote was granted to women over 30 who owned property, were graduates voting in a university constituency or who were either a member or married to a member of the Local Government Register. 
The belief was that this set of limitations would mean that mostly married women would be voting, and therefore that they would mostly vote the same way as their husbands, so it wouldn’t make too much difference. Women were only granted equal suffrage with men in 1928. Then in 1946 came the National Health Service Act. I personally think that we should be proud of our free health service, especially after I learned what residents of some other countries have to do in order to obtain medical care. In 1942, economist William Beveridge had published a report on how to defeat the five great evils of society, these being squalor, ignorance, want, idleness, and disease. Ignorance, for instance, was to be defeated through the 1944 Education Act, which made education free for all children up to the age of 15. But arguably the most revolutionary outcome of the Beveridge Report was his recommendation to defeat disease through the creation of the National Health Service. This was the principle that healthcare should be free at the point of service, paid for by a system of National Insurance so that everyone paid according to what they could afford. One of the principles behind this was that if healthcare were free, people would take better care of their health, thereby improving the health of the country overall. Or to put it another way, someone with an infectious disease would get it treated for free and then get back to work, rather than hoping it would go away, infecting others and leading to lots of lost working hours. It is an idea that was, and remains, hugely popular with the public.


As I said last week, there had been several new laws with the gradual closure of workhouses and by the beginning of the 20th century some infirmaries were even able to operate as private hospitals. A Royal Commission of 1905 reported that workhouses were unsuited to deal with the different categories of resident they had traditionally housed, and it was recommended that specialised institutions for each class of pauper should be established in which they could be treated appropriately by properly trained staff. The ‘deterrent’ workhouses were in future to be reserved for those considered as incorrigibles, such as drunkards, idlers and tramps. In Britain during the early 1900s, average life span was considered to be about 47 for a man and 50 for a woman. By the end of the century, it was about 75 and 80. Life was also greatly improved by new inventions. In fact, even during the depression of the 1930s things improved for most of the people who had a job. Of course we then had the First World War, where so many people lost their lives. So far as the United Kingdom and the Colonies are concerned, during that war there were about 888,000 military deaths (from all causes) and just about 17,000 civilian deaths due to military action and crimes against humanity. There were also around 1,675,000 military wounded. Then during the Second World War, again just in the United Kingdom (including Crown Colonies) there were almost 384,000 military deaths (from all causes), some 67,200 civilian deaths due to military action and crimes against humanity as well as almost 376,000 military wounded. On 24 January 1918 it was reported in the Daily Telegraph that the Local Government Committee on the Poor Law had presented to the Ministry of Reconstruction a report recommending abolition of the workhouses and transferring their duties to other organisations. That same year, free primary education for all children was provided in the UK.
Then a few years later the Local Government Act of 1929 gave local authorities the power to take over workhouse infirmaries as municipal hospitals, although outside London few did so. The workhouse system was officially abolished in the UK by the same Act on 1 April 1930, but many workhouses, renamed Public Assistance Institutions, continued under the control of local county councils. At the outbreak of the Second World War in 1939 almost 100,000 people were accommodated in the former workhouses, 5,629 of whom were children. Then the 1948 National Assistance Act abolished the last vestiges of the Poor Law, and with it the workhouses. Many of the buildings were converted into retirement homes run by the local authorities, so by 1960 slightly more than half of local authority accommodation for the elderly was provided in former workhouses. Under the Local Government Act 1929, the boards of guardians, who had been the authorities for poor relief since the Poor Law Amendment Act 1834, were abolished and their powers transferred to county and county borough councils. The basic responsibilities of the statutory public assistance committees set up under the Act included the provision of both ‘indoor’ and ‘outdoor’ relief. Those unable to work on account of age or infirmity were housed in Public Assistance (formerly Poor Law) Institutions and provided with the necessary medical attention, the committee being empowered to act, in respect of the sick poor, under the terms of the Mental Deficiency Acts 1913-27, the Maternity and Child Welfare Act 1918 and the Blind Persons Act 1920, in a separate capacity from other county council committees set up under those Acts. Outdoor relief for the able-bodied unemployed took the form of ‘transitional payments’ by the Treasury, which were not conditional on previous national insurance contributions, but subject to assessment of need by the Public Assistance Committee. 
The Unemployment Act 1934 transferred the responsibility for ‘transitional payments’ to a national Unemployment Assistance Board (re-named ‘Assistance Board’ when its scope was widened under the Old Age Pensions and Widows Pensions Act, 1940). Payment was still dependent on a ‘means test’ conducted by visiting government officials and, at the request of the government, East Sussex County Council, in common with other rural counties, agreed that officers of its Public Assistance Department should act in this capacity for the administrative county, excepting the Borough of Hove, for a period of eighteen months after the Act came into effect. Other duties of the Public Assistance Committee included the apprenticing and boarding-out of children under its care, arranging for the emigration of suitable persons, and maintaining a register of all persons in receipt of relief. Under the National Health Service Act 1946, Public Assistance hospitals were then transferred to the new regional hospital boards, and certain personal health services to the new Health Committee. The National Insurance Act 1946 introduced a new system of contributory unemployment insurance, national health insurance and contributory pension schemes, under the control of the Ministry of Pensions and National Insurance. Payment of ‘supplementary benefits’ to those not adequately covered by the National Insurance Scheme was made the responsibility of the National Assistance Board under the National Assistance Act 1948 and thus the old Poor Law concept of relief was finally superseded. Under the same Act, responsibility for the residential care of the aged and infirm was laid upon a new statutory committee of the county council, the Welfare Services Committee and the Public Assistance Committee was dissolved. Our world is constantly changing!

This week, an amusing image for a change…

Invisible tape.
