The Sun

The Sun is the star at the centre of the Solar System and is a massive, hot ball of plasma, inflated and heated by nuclear fusion reactions at its core. Part of this internal energy is emitted from the Sun’s surface as light, ultraviolet and infrared radiation, providing most of the energy for life on Earth. The Sun’s radius is about 695,000 kilometres (432,000 miles), or 109 times that of Earth. Its mass is about 330,000 times that of Earth, making up about 99.86% of the total mass of the Solar System. Roughly three-quarters of the Sun’s mass consists of hydrogen (73%); the rest is mostly helium (25%), with much smaller quantities of heavier elements, including oxygen, carbon, neon and iron. The Sun is classed as a G-type main-sequence star, informally called a yellow dwarf, though its light is actually white. It formed approximately 4.6 billion years ago from the gravitational collapse of matter within a region of a large molecular cloud. Most of this matter gathered in the centre, whereas the rest flattened into an orbiting disk that became the Solar System. The central mass became so hot and dense that it eventually initiated nuclear fusion in its core. It is thought that almost all stars form by this process. Every second, the Sun’s core fuses about 600 million tons of hydrogen into helium, and in the process converts 4 million tons of matter into energy. This energy, which can take between 10,000 and 170,000 years to escape the core, is the source of the Sun’s light and heat. Far in the future, when hydrogen fusion in the Sun’s core diminishes to the point where the Sun is no longer in hydrostatic equilibrium, its core will undergo a marked increase in density and temperature which will push its outer layers to expand, eventually transforming the Sun into a red giant. This process will make the Sun large enough to render Earth uninhabitable approximately five billion years from the present. After this, the Sun will shed its outer layers and become a dense type of cooling star known as a white dwarf, no longer producing energy by fusion, but still glowing and giving off heat from its previous fusion. So we have a while yet to live here! The enormous effect of the Sun on Earth has been recognised since prehistoric times and the Sun was thought of by some cultures as a deity. The synodic rotation of Earth and its orbit around the Sun are the basis of some solar calendars and the predominant calendar in use today is the Gregorian, which is based upon the standard sixteenth-century interpretation of the Sun’s observed movement as actual movement.
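As a quick back-of-envelope check of those figures, the little Python sketch below (not part of the original post; the speed of light is the standard value and the variable names are purely illustrative) applies E = mc² to the quoted conversion rate of about 4 million tonnes per second. It gives roughly 3.6 × 10^26 watts, close to the Sun’s measured output of about 3.8 × 10^26 watts, and shows that only about 0.7% of the fused hydrogen’s mass is actually released as energy.

C = 299_792_458.0        # speed of light in metres per second
hydrogen_fused = 6.0e11  # kg fused each second (about 600 million tonnes, as quoted)
mass_converted = 4.0e9   # kg converted to energy each second (about 4 million tonnes, as quoted)

luminosity = mass_converted * C ** 2   # E = m c^2, in watts
print(f"Implied power output: {luminosity:.2e} W")                                 # ~3.6e26 W
print(f"Fraction of fused mass released: {mass_converted / hydrogen_fused:.1%}")   # ~0.7%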

The English word ‘Sun’ developed from Old English ‘sunne’, and cognates appear in other Germanic languages. This is ultimately related to the word for ‘sun’ in other branches of the Indo-European language family, for example Latin (sōl), ancient Greek (hēlios) and Welsh (haul), as well as Sanskrit, Persian and others. The principal adjectives for the Sun in English are ‘sunny’ for sunlight and, in technical contexts, ‘solar’ from Latin ‘sōl’, the latter found in terms such as solar day, solar eclipse and Solar System. In science fiction, Sol may be used as a name for the Sun to distinguish it from other stars. The English weekday name ‘Sunday’ stems from Old English ‘Sunnandæg’ or “sun’s day”, a Germanic interpretation of the Latin phrase ‘diēs sōlis’, itself a translation of the ancient Greek ‘hēmera hēliou’, or ‘day of the sun’. The Sun has an absolute magnitude of +4.83, estimated to be brighter than about 85% of the stars in the Milky Way, most of which are red dwarfs. The Sun is regarded as a heavy-element-rich star. The formation of the Sun may have been triggered by shockwaves from one or more nearby supernovae and this is suggested by a high abundance of heavy elements in the Solar System, such as gold and uranium, relative to the abundances of these elements in stars poorer in heavy elements. These heavy elements could most plausibly have been produced by nuclear reactions during a supernova, or by transmutation through neutron absorption within a massive, second-generation star. The Sun is by far the brightest object in the Earth’s sky and is about thirteen billion times brighter than the next brightest star, Sirius. At its average distance, light travels from the Sun’s horizon to Earth’s horizon in about eight minutes and twenty seconds, whilst light from the closest points of the Sun and Earth takes about two seconds less. The energy of this sunlight supports almost all life on Earth by photosynthesis and drives both Earth’s climate and weather. The Sun does not have a definite boundary, but its density decreases exponentially with increasing height above the photosphere. For the purpose of measurement, the Sun’s radius is considered to be the distance from its centre to the edge of the photosphere, the apparent visible surface of the Sun. By this measure, the Sun is a near-perfect sphere with an oblateness estimated at nine millionths, which means that its polar diameter differs from its equatorial diameter by only 6.2 miles (10 kilometres). The tidal effect of the planets is weak and does not significantly affect the shape of the Sun. The Sun’s original chemical composition was inherited from the interstellar medium from which it formed. Originally it would have contained about 71.1% hydrogen, 27.4% helium, and 1.5% heavier elements. The hydrogen and most of the helium in the Sun would have been produced by nucleosynthesis in the first twenty minutes of the universe, and the heavier elements were produced by previous generations of stars before the Sun was formed, and spread into the interstellar medium during the final stages of stellar life as well as by events such as supernovae. Since the Sun formed, the main fusion process has involved fusing hydrogen into helium and over the past 4.6 billion years, the amount of helium and its location within the Sun has gradually changed.
Within the core, the proportion of helium has increased from about 24% to about 60% due to fusion, and some of the helium and heavy elements have settled from the photosphere towards the centre of the Sun because of gravity, but the proportions of heavier elements are unchanged. Heat is transferred outward from the Sun’s core by radiation rather than by convection, so the fusion products are not lifted outward by heat; they remain in the core and gradually an inner core of helium has begun to form that cannot be fused because presently the Sun’s core is not hot or dense enough to fuse helium. In the current photosphere, the helium fraction is reduced, and the metallicity is only 84% of what it was in the ‘protostellar’ phase, that is before nuclear fusion in the core started. In the future, helium will continue to accumulate in the core, and in about 5 billion years this gradual build-up will eventually cause the Sun to leave the main sequence and become a red giant.
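Going back to the light-travel-time figure quoted a little earlier, here is another small Python sketch (again not part of the original post; the astronomical unit, solar radius and Earth radius are standard values I have assumed) showing where the roughly eight minutes and twenty seconds, and the two-second difference for the closest points, come from.

C = 299_792_458.0       # speed of light in metres per second
AU = 1.495978707e11     # astronomical unit in metres (Earth's average distance from the Sun)
R_SUN = 6.957e8         # solar radius in metres
R_EARTH = 6.371e6       # Earth's mean radius in metres

centre_to_centre = AU / C                          # ~499 s, about 8 minutes 19 seconds
closest_surfaces = (AU - R_SUN - R_EARTH) / C      # ~497 s, roughly two seconds less
print(f"Centre to centre: {centre_to_centre:.0f} seconds")
print(f"Closest surfaces: {closest_surfaces:.0f} seconds")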

An illustration of the Sun’s structure, shown here in false colour to provide contrast.

The visible surface of the Sun, the photosphere, is the layer below which the Sun becomes opaque to visible light. Photons produced in this layer escape the Sun through the transparent solar atmosphere above it and become solar radiation, sunlight. The change in opacity is due to the decreasing amount of H− ions, which absorb visible light easily. Conversely, the visible light we see is produced as electrons react with hydrogen atoms to produce H− ions. The photosphere is up to hundreds of kilometres thick, and is slightly less opaque than air on Earth. Because the upper part of the photosphere is cooler than the lower part, an image of the Sun appears brighter in the centre than on the edge or limb of the solar disk, in a phenomenon known as limb darkening. During early studies of the optical spectrum of the photosphere, some absorption lines were found that did not correspond to any chemical elements then known on Earth. In 1868, Norman Lockyer hypothesised that these absorption lines were caused by a new element that he dubbed helium, after the Greek Sun god Helios. Twenty-five years later, helium was isolated on Earth. The Sun’s atmosphere is composed of five parts, these being the photosphere (visible under normal conditions), the chromosphere, the transition region, the corona and the heliosphere. During a total solar eclipse, the photosphere is blocked, making the corona visible. The coolest layer of the Sun is a temperature minimum region extending to about 500 km above the photosphere, and has a temperature of about 4,100K. This part of the Sun is cool enough to allow for the existence of simple molecules such as carbon monoxide and water, which can be detected via their absorption spectra. The chromosphere, transition region, and corona are much hotter than the surface of the Sun.

An image of the Sun’s transition region, taken by Hinode’s Solar Optical Telescope.

Above the temperature minimum layer is a layer about 2,000 km thick, dominated by a spectrum of emission and absorption lines. It is called the chromosphere from the Greek root ‘chroma’, meaning colour, because the chromosphere is visible as a coloured flash at the beginning and end of total solar eclipses. The temperature of the chromosphere increases gradually with altitude, ranging up to around 20,000K near the top. In the upper part of the chromosphere helium becomes partially ionised. Above the chromosphere, in a thin (about 200 km) transition region, the temperature rises rapidly from around 20,000K in the upper chromosphere to coronal temperatures closer to 1,000,000K. The temperature increase is facilitated by the full ionisation of helium in the transition region, which significantly reduces radiative cooling of the plasma. The transition region does not occur at a well-defined altitude, rather it forms a kind of ‘nimbus’ around chromospheric features such as spicules and filaments and is in constant, chaotic motion. The transition region is not easily visible from Earth’s surface, but is readily observable from space by instruments sensitive to the ultraviolet portion of the spectrum.

A total solar eclipse seen on Monday, August 21, 2017 above Madras, Oregon. Photo Credit: (NASA/Aubrey Gemignani)

The corona is the next layer of the Sun and the average temperature of this corona and solar wind is about 1,000,000 to 2,000,000K; however, in the hottest regions it is 8,000,000 to 20,000,000K. Although no complete theory yet exists to account for the temperature of the corona, at least some of its heat is known to be from ‘magnetic reconnection’, a physical process occurring in electrically conducting plasmas in which the magnetic topology is rearranged and magnetic energy is converted to kinetic energy, thermal energy and particle acceleration. The corona is the extended atmosphere of the Sun, which has a volume much larger than the volume enclosed by the Sun’s photosphere. A flow of plasma outward from the Sun into interplanetary space is the solar wind. Meanwhile the heliosphere, the tenuous outermost atmosphere of the Sun, is filled with solar wind plasma. This outermost layer of the Sun is defined to begin at the distance where the flow of the solar wind becomes ‘superalfvénic’, that is, where the flow becomes faster than the speed of Alfvén waves, at approximately 20 solar radii (0.1 AU). Turbulence and dynamic forces in the heliosphere cannot affect the shape of the solar corona within, because the information can only travel at the speed of Alfvén waves. The solar wind travels outward continuously through the heliosphere, forming the solar magnetic field into a spiral shape until it impacts the ‘heliopause’ more than 50 Astronomical Units (AU) from the Sun. In December 2004, the Voyager 1 probe passed through a shock front that is thought to be part of the heliopause. In late 2012, Voyager 1 recorded a marked increase in cosmic ray collisions and a sharp drop in lower energy particles from the solar wind, which suggested that the probe had passed through the heliopause and entered the interstellar medium; indeed it did so on August 25, 2012, at approximately 122 AU from the Sun. The heliosphere has a heliotail which stretches out behind it due to the Sun’s movement.

The Sun emits light across the visible spectrum, so its colour is white when viewed from space or when the Sun is high in the sky. The solar radiance per wavelength peaks in the green portion of the spectrum when viewed from space. When the Sun is very low in the sky, atmospheric scattering renders the Sun yellow, red, orange, or magenta, and on rare occasions even green or blue. Despite its typical whiteness, some cultures mentally picture the Sun as yellow and some even red; the reasons for this are cultural and their exact origins are the subject of debate. The Sun is classed as a G2V star, with ‘G2’ indicating its surface temperature of approximately 5,778 K (5,505 °C; 9,941 °F), and V that it, like most stars, is a ‘main-sequence’ star.

The Solar Constant is the amount of power that the Sun deposits per unit area that is directly exposed to sunlight. It is equal to approximately 1,368 watts per square metre at a distance of one astronomical unit (AU) from the Sun (that is, on or near Earth). Sunlight on the surface of Earth is attenuated by Earth’s atmosphere, so that less power, closer to 1,000 watts per square metre, arrives at the surface even in clear conditions when the Sun is near the zenith. Sunlight at the top of Earth’s atmosphere is composed (by total energy) of about 50% infrared light, 40% visible light, and 10% ultraviolet light and the atmosphere in particular filters out over 70% of solar ultraviolet, especially at the shorter wavelengths. Solar ultraviolet radiation ionises Earth’s dayside upper atmosphere, creating the electrically conducting ionosphere. Ultraviolet light from the Sun has antiseptic properties and can be used to sanitise tools and water. It also causes sunburn and has other biological effects such as the production of vitamin D and sun tanning. It is also the main cause of skin cancer. Ultraviolet light is strongly attenuated by Earth’s ozone layer, so that the amount of UV varies greatly with latitude and has been partially responsible for many biological adaptations, including variations in human skin colour in different regions of the Earth.
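To see how that figure connects to the Sun’s total output, here is a short Python sketch (not from the original post; it simply assumes the quoted value of about 1,368 W/m² and a standard astronomical unit). Multiplying the solar constant by the area of a sphere one AU in radius recovers the Sun’s luminosity of roughly 3.8 × 10^26 watts, matching the figure implied by the mass-to-energy conversion rate quoted earlier.

import math

SOLAR_CONSTANT = 1368.0   # watts per square metre at 1 AU, as quoted above
AU = 1.495978707e11       # astronomical unit in metres

# Total power crossing a sphere of radius 1 AU centred on the Sun.
luminosity = SOLAR_CONSTANT * 4.0 * math.pi * AU ** 2
print(f"Solar luminosity: {luminosity:.2e} W")   # ~3.8e26 W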

Once outside the Sun’s surface, neutrinos and photons travel at the speed of light.

The Sun is about half-way through its main-sequence stage, during which nuclear fusion reactions in its core fuse hydrogen into helium. Each second, more than four million tonnes of matter are converted into energy within the Sun’s core, producing neutrinos and solar radiation. At this rate, the Sun has so far converted around 100 times the mass of Earth into energy, about 0.03% of the total mass of the Sun. It is gradually becoming hotter in its core, hotter at the surface, larger in radius, and more luminous during its time on the main sequence. Since the beginning of its main sequence life, it has expanded in radius by 15% and the surface has increased in temperature from 5,620 K (5,350 °C; 9,660 °F) to 5,777 K (5,504 °C; 9,939 °F), resulting in a 48% increase in luminosity from 0.677 solar luminosities to its present-day 1.0 solar luminosity. This occurs because the helium atoms in the core have a higher mean molecular weight than the hydrogen atoms which were fused, resulting in less thermal pressure. The core is therefore shrinking, allowing the outer layers of the Sun to move closer to the centre, releasing gravitational potential energy. It is believed that half of this released gravitational energy goes into heating, which leads to a gradual increase in the rate at which fusion occurs and thus an increase in the luminosity. This process speeds up as the core gradually becomes denser and at present, it is increasing in brightness by about 1% every 100 million years. This increase is expected to take at least one billion years from now to deplete liquid water from the Earth. After that, the Earth will cease to be able to support complex, multicellular life and the last remaining multicellular organisms on the planet will suffer a final, complete mass extinction. Also, according to 2022 findings from the ESA’s Gaia space observatory mission, the Sun will be at its hottest at around the eight-billion-year mark, but will spend a total of approximately ten to eleven billion years as a main-sequence star before its red giant phase.
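Those first two figures are easy to check against each other. The Python sketch below (not part of the original post; the Earth and Sun masses are standard values, and it assumes today’s conversion rate of a little over 4 million tonnes per second has held over the Sun’s whole 4.6-billion-year life, which is only approximately true) reproduces both the “around 100 Earth masses” and the “about 0.03%” quoted above.

SECONDS_PER_YEAR = 3.156e7
AGE_YEARS = 4.6e9        # approximate age of the Sun, as quoted
RATE = 4.26e9            # kg of matter converted to energy each second (a little over 4 million tonnes)
M_EARTH = 5.972e24       # mass of Earth in kg
M_SUN = 1.989e30         # mass of the Sun in kg

total_converted = RATE * AGE_YEARS * SECONDS_PER_YEAR
print(f"Earth masses converted so far: {total_converted / M_EARTH:.0f}")   # ~100
print(f"Fraction of the Sun's mass: {total_converted / M_SUN:.2%}")        # ~0.03%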

The size of the Sun now, in its ‘main sequence’ phase, compared to its estimated size during its future ‘red giant’ phase.

But the Sun does not have enough mass to explode as a supernova. Instead, when it runs out of hydrogen in the core in approximately five billion years, core hydrogen fusion will stop, and there will be nothing to prevent the core from contracting. The release of gravitational potential energy will cause the luminosity of the Sun to increase, ending the main sequence phase and leading the Sun to expand over the next billion years: first into a subgiant and then into a red giant. The heating due to gravitational contraction will also lead to expansion of the Sun and hydrogen fusion in a shell just outside the core, where unfused hydrogen remains, contributing to the increased luminosity, which will eventually reach more than 1,000 times its present luminosity. When the Sun enters its red-giant branch phase, it will engulf Mercury and most likely Venus, reaching about 0.75 AU (110 million km; 70 million miles). The Sun will spend around a billion years in this phase and lose around a third of its mass. After the red-giant branch, the Sun has approximately 120 million years of active life left, but much happens. First, the core (full of degenerate helium) will ignite violently in the helium flash and it is estimated that 6% of the core, itself 40% of the Sun’s mass, will be converted into carbon within a matter of minutes. The Sun then shrinks to around ten times its current size and fifty times the luminosity, with a temperature a little lower than today. It will then have reached the ‘horizontal branch’, but a star of the Sun’s metallicity does not evolve blueward along the horizontal branch. Instead, it just becomes moderately larger and more luminous over about 100 million years as it continues to react helium in the core. When the helium is exhausted, the Sun will repeat the expansion it followed when the hydrogen in the core was exhausted. This time, however, it all happens faster, and the Sun becomes larger and more luminous, engulfing Venus if it has not already. This is the asymptotic-giant-branch phase, and the Sun is alternately reacting hydrogen in a shell or helium in a deeper shell. After about 20 million years on the early asymptotic giant branch, the Sun becomes increasingly unstable, with rapid mass loss and thermal pulses that increase the size and luminosity for a few hundred years every 100,000 years or so. The thermal pulses become larger each time, with the later pulses pushing the luminosity to as much as 5,000 times the current level and the radius to over 1 AU (150 million km; 93 million miles). According to a 2008 model, Earth’s orbit will have initially expanded to at most 1.5 AU (220 million km; 140 million miles) due to the Sun’s loss of mass as a red giant. However, Earth’s orbit will later start shrinking due to tidal forces and, eventually, drag from the lower chromosphere so that it is engulfed by the Sun during the tip of the red-giant branch phase, some 3.8 and 1 million years after Mercury and Venus have respectively suffered the same fate. Models vary depending on the rate and timing of mass loss. Models that have higher mass loss on the red-giant branch produce smaller, less luminous stars at the tip of the asymptotic giant branch, perhaps only 2,000 times the luminosity and less than 200 times the radius. For the Sun, four thermal pulses are predicted before it completely loses its outer envelope and starts to make a planetary nebula.
By the end of that phase, which will last approximately 500,000 years, the Sun will only have about half of its current mass. The post-giant-branch evolution is even faster. The luminosity stays approximately constant as the temperature increases, with the ejected half of the Sun’s mass becoming ionised into a planetary nebula as the exposed core reaches 30,000 K (29,700 °C; 53,500 °F). The final naked core, a white dwarf, will have a temperature of over 100,000 K (100,000 °C; 180,000 °F), and contain an estimated 54.05% of the Sun’s present-day mass. The planetary nebula will disperse in about 10,000 years, but the white dwarf will survive for trillions of years before fading to a hypothetical black dwarf. I think that is enough for now! In the future I plan to write about the Solar System and the planets, which I hope you will find of interest.

This week…Time

People, especially children, don’t know what they don’t know, so raising awareness is a vital first step in their education process. For example, first becoming aware that ‘time’ exists, leading on to using a clock to measure and display time.


Memories Of Cars

A Ford Model Y, 1937.

I was born in London, but as I have previously mentioned in earlier blog posts the bad smog of 1952 caused severe health issues with my Mum which meant we moved to Whittlesey, near Peterborough. It also meant I arrived in this world just a few weeks earlier than was expected. I was less than a year old then, and my memories of life began there. My age was still in single figures when my Dad bought his first car, a 1937 Ford Eight, though some might call it a Model Y. Even now I can still recall walking down the road with him to the little garage with shiny black doors where our car was kept each night, sliding the garage door open and getting that delightful smell of petrol, oil and leather. 

A Ford Popular, c 1960.

But that car, in time, had to be changed and Dad got a green Ford Popular. It wasn’t new, but it was fine for us and I still recall days out along the Wash road to friends in nearby Thorney. But a policeman that we knew there saw it and asked Dad about it, as this same policeman had just got a brand new car and the registration number of his new car wasn’t too far different from Dad’s Ford Popular. It seems Dad’s had been re-registered, for some reason. We think it had been in an accident and rebuilt. But it was fine. Then a few years later Dad saw a Ford Anglia, it was a really lovely turquoise blue with a bright white roof. This made it a deluxe model and it was a special, as it‘d had a bigger engine put in (technically, it was a 1340cc Consul Classic engine with a 3-bearing crankshaft) along with anti-roll bars on the back. Sadly the original owner’s son, who had wanted to use this special car for racing, had been killed in an accident so the owner sold the car.

A Ford Anglia deluxe.

Up to now my Dad had been used to a car with only three forward gears and this one had four, so getting it into reverse took some working out but we managed it! This car went well, but a while later the 1340cc engine needed replacing, so an exchange engine was organised. Ford did exchange engines, so Dad asked to have an Anglia Super, 1200cc engine fitted. But Ford only did exchange engine upgrades, which meant Dad had to have a 1500cc Cortina engine (with a 5-bearing crankshaft) fitted! This made it very much a GT version. It was really lovely, especially as one of my older brothers got to drive it. But after a while my Mum’s back was starting to become a bit painful due to a bad war injury, so Dad sold the Ford Anglia and got a green Austin 1100. Sadly that was a mistake and whilst we were on holiday down in North Devon this Austin 1100 needed a bit of repair. We were friendly with the local garage owner, who, like us, had been born in London so I and my Dad sold petrol etc there at the garage whilst the garage owner fixed the car. A few years later Dad retired from teaching and bought a brand new car, an Austin 1300. It was a really bright yellow colour and when we took ourselves down for a holiday in North Devon, we called in for petrol and the same garage owner took one look and burst out laughing, saying “Who spilt the mustard pot, then?”. We all had a laugh at that.

An Austin 1300.

By now I was able to drive, so Dad let me take Mum shopping and to various places locally, then I was told of a Ford Anglia Super owned by a relative of my sister-in-law. This car needed a bit of work doing to it as the nearside front wing was quite rusty but we agreed a price and I brought it home. With the help of a neighbour I patched up the nearside wing whilst I saved up for a new wing. In truth it was pretty bad; it looked all right, but as it was held together with chicken wire it wouldn’t have passed an M.O.T. test! Still, I got it repaired with a brand new wing and then had to carry on saving, as the offside front wing was now going rusty. Meanwhile I drove down to London to visit relatives, but driving round Hyde Park Corner I had a coming together with another car. The other driver didn’t stop; it was just a slight scrape, but I was annoyed and showed the uncle I was visiting when I got to their house. He shrugged it off, saying “not to worry, it happens all the time here”, but as you might expect the scrape was on the new wing! My uncle soon got a cloth and cleaned the scrape, and all was well. A few years later I sold the car and bought an Austin 1100, but that needed too much work doing so after a while I scrapped that. A little later I was looking around the second-hand cars at a local Ford dealer and I saw a lovely Ford Capri 1600, right at the back of the rows of cars. I asked to see that one and perhaps test-drive it, but the salesman tried to sell me a similar one parked near the front of the others. He was almost too insistent that I try this other one but I stood my ground, and after some work the car I wanted was brought out and I did test it. I also got a friend who owned a garage to give that car a once-over; it was fine apart from a couple of minor things, and so I bought that lovely Capri 1600.

A Ford Capri 1600.

That did me well for quite a while until sadly, after a long couple of weeks away working, I was only a mile or two away from home when I had an accident. My fault, I was tired and these things happen. The car was scrapped and so a short while later I went back to the Ford dealer to get another car. This same salesman was absolutely delighted to see me, as my old car had been seen and recognised in the scrapyard!

A Ford Escort 1300.

So I tested and bought a Ford Escort which was great. In the meantime I’d had promotion at work and could now afford to buy a house, which I did. My mother had needed bed rest in hospital prior to having a hip replacement and whilst visiting her I had met a nice young lady, in fact I probably spent more time seeing the young lady than I did my mother! I started dating the girl, she had a few difficulties as she was in a wheelchair and couldn’t walk but that wasn’t a problem to me. Sadly it wasn’t meant to be though, as she had other, mental issues which were beyond my help. In the meantime I had changed the Ford Escort for a Ford Fiesta – a bright yellow one – and I was managing. Then at work I was told that in the small print of the form I had signed, there was a clause which meant that I could be demoted within a certain time. That was unexpected, but even with help from the local Union I had to accept it. So I sold the Fiesta and bought a small motorcycle, as I was keeping the house. Later I sold the motorbike, which pleased my dear Mum! I learned some years afterwards that the family had been looking out for a car for me and so I bought a little Austin Mini, which was excellent. I drove that for many miles over the next few years!

An Austin Mini

As happens, there were changes made at BT, with reorganisations which enabled me to get a transfer to Leicester because I was technically through a promotions board, awaiting a post. I started driving from Peterborough to Leicester every day, but that put a strain on me and the little car, which led to me meeting, chatting up and later marrying the lovely lady whom I had met on the train. We divorced a few years later, but that is life. I bought a house in Leicester and sold the Mini. It had done its job. Happily I was able to get a Fiat Panda which, with the help of my eldest brother, kept me on the road as I had now transferred from Leicester to Nottingham. I was living in a house not far from Chesterfield though, which again put a strain on me, my marriage and my car.

A Fiat Panda.

Further changes in BT led me to work in Sheffield, then down to Birmingham for a few years before returning to Sheffield. Yet more reorganisations meant I was in Manchester as a trainer for a while, before returning to… Leicester! Which is where I finished. In the meantime I had sold the Fiat Panda and had a Land Rover Series 3; then, when my eldest brother retired due to ill health, I bought his Land Rover Defender.

A Land Rover Defender.

My return to Leicester culminated in me changing that for a Land Rover Discovery that had hardly been used and I did quite a few miles in that. It was to me the absolute best of all the cars I had ever had.

A Land Rover Discovery.

I had to finish my driving career though, as sadly ill-health stopped me driving. Still, I had been driving from 1970 to 2015, and it took a bit of getting used to, using public transport. But I was able to get a bus pass which also gave me free train travel from Leicester to a few places, including Peterborough, which suited me nicely. I sold the Discovery to a local dealer who I believe may have kept it for himself, as it was in such good condition. I missed driving, but I had to accept that if I had an accident I would not just be putting myself in danger, but others too. Knowing that meant I had absolutely no choice.

This week… a reminder.

Remember that success is a journey, not a destination.
We take care of our bodies to protect us.
We have family to teach us,
We have good friends to steer us,
We have good food to fuel us,
With breaks to maintain and service us.

A History of Tattooing

A tattoo is a form of body modification made by inserting tattoo ink, dyes, and/or pigments, either indelible or temporary, into the dermis layer of the skin to form a design. The art of tattooing has been practiced across the globe since at least Neolithic times, as evidenced by mummified preserved skin, ancient art and the archaeological record. Both ancient art and archaeological finds of possible tattoo tools suggest tattooing was practiced by the Upper Paleolithic period in Europe; however, direct evidence for tattooing on mummified human skin extends only to the fourth millennium BC. The oldest discovery of tattooed human skin to date has been found on the body of Ötzi the Iceman, dating to between 3370 and 3100 BC. Other tattooed mummies have been recovered from many archaeological sites, including locations in Greenland, Alaska, Siberia, Mongolia, western China, Egypt, Sudan, the Philippines and the Andes. There are preserved tattoos on ancient mummified human remains which reveal that tattooing has been practiced throughout the world for millennia. In 2015, scientific re-assessment of the age of the two oldest known tattooed mummies identified Ötzi as the oldest example then known. His body, with sixty-one tattoos, was found embedded in glacial ice in the Alps and was dated to 3250 BC. In 2018 the oldest figurative tattoos in the world were discovered on two mummies from Egypt which are dated between 3351 and 3017 BC.

Hawaiian hafted instrument, mallet and ink bowl, which are the classic instruments of traditional Austronesian tattooing culture.

Ancient tattooing was widely practiced among the Austronesian people and was one of the early technologies developed by the pre-Austronesians in Taiwan and coastal South China prior to at least 1500 BC, before the Austronesian expansion into the islands of the Indo-Pacific. For the most part, Austronesians used characteristic perpendicularly hafted tattooing points that were tapped on the handle with a length of wood to drive the tattooing points into the skin. The handle and mallet were generally made of wood whilst the points, either single, grouped or arranged to form a comb were made of Citrus thorns, fish bone, bone, teeth and turtle and oyster shells.

A ‘Yue’ or barbarian statue of a tattooed man with short hair from the cultures of Southern China. (Zhejiang Provincial Museum)

Cemeteries throughout the Tarim Basin of western China have revealed several tattooed mummies with Western Asian/Indo-European physical traits and cultural materials. These date from between 2100 and 550 BC. In ancient China, tattoos were considered a barbaric practice associated with the Yue peoples of southeastern and southern China. Tattoos were often referred to in literature depicting bandits and folk heroes. As late as the Qing dynasty it was common practice to tattoo characters such as ‘Prisoner’ on convicted criminals’ faces. Although relatively rare during most periods of Chinese history, slaves were also sometimes marked to display ownership. However, tattoos seem to have remained a part of southern culture. Marco Polo wrote that “Many come hither from Upper India to have their bodies painted with the needle in the way we have elsewhere described, there being many adepts at this craft in the city”. The Indigenous peoples of North America have a long history of tattooing. Tattooing was not a simple marking on the skin: it was a process that highlighted cultural connections to Indigenous ways of knowing and viewing the world, as well as connections to family, society, and place. There has been no way to determine the actual origin of tattooing for these people, though it is known that the St. Lawrence Iroquoians had used bones as tattooing needles, and turkey bone tattooing tools were discovered at an ancient Fernvale, Tennessee site, dated back to 3500–1600 BC. Until recently, archeologists have not prioritised the classification of tattoo implements when excavating known historic sites, but recent review of materials found from one excavation site point towards elements of tattoo bundles that are from pre-colonisation times. Scholars explain that the recognition of tattoo implements is significant because it highlights the cultural importance of tattooing for indigenous people.

Ladies Of Secota.

The above is a page from Thomas Harriot’s book ‘A Brief and True Report of the New Found Land of Virginia’ showing a painting by John White. Markings on the skin represent tattoos that were observed. Early explorers to North America made many ethnographic observations about the people they met. Initially, they did not have a word for tattooing and instead described these modifications as to ‘stamp, paint, burn, and embroider’ the skin. The Jesuit Relations of 1652 describes tattooing thus: “But those who paint themselves permanently do so with extreme pain, using, for this purpose, needles, sharp awls, or piercing thorns, with which they perforate, or have others perforate, the skin. Thus they form on the face, the neck, the breast, or some other part of the body, some animal or monster, for instance, an Eagle, a Serpent, a Dragon, or any other figure which they prefer; and then, tracing over the fresh and bloody design some powdered charcoal, or other black colouring matter, which becomes mixed with the blood and penetrates within these perforations, they imprint indelibly upon the living skin the designed figures. And this in some nations is so common that in the one which we called the Tobacco, and in that which – on account of enjoying peace with the Hurons and with the Iroquois – was called Neutral, I know not whether a single individual was found, who was not painted in this manner, on some part of the body”. It seems that the Inuit also have a deep history of tattooing. In the Inuit language of the eastern Canadian Arctic, the word ‘kakiniit’ translates to the English word for tattoo and the word ‘tunniit’ means face tattoo. Among the Inuit, some tattooed female faces and parts of the body symbolise a girl transitioning into a woman, coinciding with the start of her first menstrual cycle. A tattoo represented a woman’s beauty, strength, and maturity, and this was an important practice because some Inuit believed that a woman could not transition into the spirit world without tattoos on her skin. But European missionaries colonised the Inuit at the beginning of the twentieth century and condemned tattooing as an evil practice, ‘demonizing’ anyone who valued tattoos. But latterly people have talked to elder Inuit folk and these elders were able to recall the traditional practice of tattooing, which often included using a needle and thread and sewing the tattoo into the skin by dipping the thread in soot or seal oil, or through skin poking using a sharp needle point and dipping it into soot or seal oil. As a result, work has been done with the elders in their community to bring the tradition of kakiniit back by learning the traditional ways of tattooing and using their skills to tattoo others. However, the Osage people, a Midwestern Native American tribe of the Great Plains, used tattooing for a variety of different reasons. The tattoo designs were based on the belief that people were part of the larger cycle of life and integrated elements of the land, sky, water, and the space in between to symbolise these beliefs. These people also believed in the smaller cycle of life, recognising the importance of women giving life through childbirth and men removing life through warfare. Osage men were often tattooed after accomplishing major feats in battle, as a visual and physical reminder of their elevated status in their community. Some Osage women were tattooed in public as a form of prayer, demonstrating strength and dedication to their nation.
Meanwhile in Central America, a Spanish expedition led by Gonzalo de Badajoz in 1515 across what is today Panama ran into a village where prisoners from other tribes had been marked with tattoos. The Spaniards also found some slaves who had been branded in a painful fashion: the natives cut lines in the faces of the slaves, using a sharp point either of gold or of a thorn; they then filled the wounds with a kind of powder dampened with black or red juice, which formed an indelible dye that never disappeared.

A tattooed man’s back, c. 1875.

Tattooing for spiritual and decorative purposes in Japan is thought to extend back to at least the Jomon or Paleolithic period and was widespread during various periods for both the Yamato and native Jomon groups. Chinese texts from before 300 AD described social differences among Japanese people as being indicated through tattooing and other bodily markings. Chinese texts from the time also described Japanese men of all ages as decorating their faces and bodies with tattoos. Generally firemen, manual workers and prostitutes wore tattoos to communicate their status, but by the early seventeenth century, criminals were widely being tattooed as a visible mark of punishment. Criminals were marked with symbols typically including crosses, lines, double lines and circles on certain parts of the body, mostly the face and arms. These symbols sometimes designated the places where the crimes were committed. In one area, the character for “dog” was tattooed on the criminal’s forehead. Then the Government of Meiji, formed in 1868, banned the art of tattooing altogether, viewing it as barbaric and lacking respectability. This subsequently created a subculture of criminals and outcasts. These people had no place in ‘decent’ society and were frowned upon. They could not simply integrate into mainstream society because of their obvious visible tattoos, forcing many of them into criminal activities which ultimately formed the roots for the modern Japanese mafia, the Yakuza, with which tattoos have become almost synonymous in Japan. It seems too that Thai-Khmer tattoos, also known as Yantra tattooing, have been common since ancient times. As in other native southeast Asian cultures, animistic tattooing was common in the Tai tribes that were in southern China. Over time, this animistic practice of tattooing for luck and protection assimilated Hindu and Buddhist ideas. The Sak Yant traditional tattoo is practiced today by many and is usually given either by a Buddhist monk or a Brahmin priest. The tattoos usually depict Hindu gods and use the scripts of the classical civilisations of mainland southeast Asia.

A Bontoc warrior bearing a headhunter’s tattoo.

Tattooing, or ‘batok’ on both sexes was practiced by almost all ethnic groups of the Philippine Islands during the pre-colonial era. Ancient clay human figurines found in archaeological sites in the Batanes Islands, around 2500 to 3000 years old, have simplified stamped-circle patterns, which are believed to represent tattoos and possibly branding (also commonly practiced) as well. Excavations at the Arku Cave burial site in Cagayan Province in northern Luzon have also yielded both chisel and serrated-type heads of possible hafted bone tattoo instruments alongside Austronesian material culture markers like adzes, spindle whorls, bark-cloth beaters, and jade ornaments. These were dated to before 1500 BC and are remarkably similar to the comb-type tattoo chisels found throughout Polynesia. Tattoos are acquired gradually over the years, and patterns can take months to complete and heal. For many the tattooing processes are sacred events that involve rituals to ancestral spirits and the heeding of omens. For example, if the artist or the recipient sneezes before a tattooing, it was seen as a sign of disapproval by the spirits, and the session was called off or rescheduled. At one time artists were usually paid with livestock, heirloom beads, or precious metals. They were also housed and fed by the family of the recipient during the process. A celebration was usually held after a completed tattoo. The Māori people of New Zealand practised a form of tattooing known as ‘tā moko’, traditionally created with chisels. However, from the late twentieth century onwards, there has been a resurgence of tā moko taking on European styles amongst Maori. Traditional tā moko was reserved for head area. There is also a related tattoo art, kirituhi, which has a similar aesthetic to tā moko but is worn by non-Maori.

The earliest possible evidence for tattooing in Europe appears on ancient art from the Upper Paleolithic period as incised designs on the bodies of humanoid figurines. The Löwenmensch figurine from the Aurignacian culture dates to approximately 40,000 years ago and features a series of parallel lines on its left shoulder. The ivory ‘Venus of Hohle Fels’, which dates to between 35,000 and 40,000 years ago, also exhibits incised lines down both arms, as well as across the torso and chest. The Picts may have been tattooed with elaborate, war-inspired black or dark blue woad designs. Julius Caesar described these tattoos in Book V of his ‘Gallic Wars’ (54 BC). Nevertheless, these may have been painted markings rather than tattoos. Raised in the aftermath of the Norman conquest of England, William of Malmesbury describes in his ‘Gesta Regum Anglorum’ that, upon the arrival of the Normans, the Anglo-Saxons had their ‘arms covered with golden bracelets, tattooed with coloured patterns’. The significance of tattooing was long open to Eurocentric interpretations. In the mid-nineteenth century, Baron Haussmann, whilst arguing against painting the interior of Parisian churches, said the practice “reminds me of the tattoos used in place of clothes by barbarous peoples to conceal their nakedness”. Meanwhile Greek written records of tattooing date back to at least the fifth century BC. Both the ancient Greeks and Romans used tattooing to penalise slaves, criminals, and prisoners of war. However, in Egypt and Syria, decorative tattooing was looked down upon and mainly religious tattooing was practiced. In 316, the emperor Constantine I made it illegal to tattoo the face of slaves as punishment. The Greek verb ‘stizein’, meaning ‘to prick’, was used for tattooing. Its derivative ‘stigma’ was the common term for tattoo marks in both Greek and Latin. It still fascinates me how our ‘modern’ words originated! During the Byzantine period, the verb ‘kentein’ replaced ‘stizein’, and a variety of new Latin terms replaced ‘stigmata’, including ‘signa’ or “signs”, ‘characteres’ or “stamps”, and ‘cicatrices’ or “scars”.

Despite a lack of direct textual references, tattooed human remains and iconographic evidence indicate that ancient Egyptians practiced tattooing from at least 2000 BC. It is theorised that tattooing entered Egypt through Nubia, but this claim is complicated by the high mobility between Lower Nubia and Upper Egypt as well as Egypt’s annexation of Lower Nubia around 2000 BC; one archaeologist has argued that it may be more appropriate to classify tattooing in ancient Egypt and Nubia as part of a larger Nile Valley tradition. Ancient Egyptian tattooing appears to have been practiced on women exclusively, with the exception of a pre-dynastic male mummy, of which it was reported: “Dark smudges on his arm, appearing as faint markings under natural light, had remained unexamined. Infrared photography recently revealed that these smudges were in fact tattoos of two slightly overlapping horned animals. The horned animals have been tentatively identified as a wild bull (long tail, elaborate horns) and a Barbary sheep (curving horns, humped shoulder). Both animals are well known in Predynastic Egyptian art. The designs are not superficial and have been applied to the dermis layer of the skin, the pigment was carbon-based, possibly some kind of soot.” Two well-preserved Egyptian mummies from 4160 BC, a priestess and a temple dancer for the fertility goddess Hathor, bear random-dot and dash tattoo patterns on the lower abdomen, thighs, arms, and chest.

Meanwhile British and other pilgrims to the Holy Lands throughout the seventeenth century were tattooed with the Jerusalem Cross to commemorate their voyages. Between 1766 and 1779, Captain James Cook made three voyages to the South Pacific, the last trip ending with Cook’s death in Hawaii in February 1779. When Cook and his men returned home to Europe from their voyages to Polynesia, they told tales of the ‘tattooed savages’ they had seen. The word “tattoo” itself comes from the Tahitian word ‘tatau’, and was introduced into the English language by Cook’s expedition, though the word ‘tattoo’ or ‘tap-too’, referring to a drumbeat, had existed in English since at least 1644. It was in Tahiti aboard the Endeavour, in July 1769, that Cook first noted his observations about the indigenous body modification; this is the first recorded use of the word tattoo to refer to the permanent marking of the skin. In the ship’s log book he recorded this entry: “Both sexes paint their Bodys, Tattow, as it is called in their Language. This is done by inlaying the Colour of Black under their skins, in such a manner as to be indelible.” Cook went on to write, “This method of Tattowing I shall now describe…As this is a painful operation, especially the Tattowing of their Buttocks, it is performed but once in their Lifetimes.” Cook’s Science Officer and Expedition Botanist, Sir Joseph Banks, returned to England with a tattoo. Banks was a highly regarded member of the English aristocracy and had acquired his position with Cook by putting up what was at the time the princely sum of some ten thousand pounds in the expedition. In turn, Cook brought back with him a tattooed Raiatean man, Omai, whom he presented to King George and the English Court. Many of Cook’s men, ordinary seamen and sailors, came back with tattoos, a tradition that would soon become associated with men of the sea in the public’s mind and the press of the day. In the process, sailors and seamen re-introduced the practice of tattooing in Europe, and it spread rapidly to seaports around the globe. By the nineteenth century, tattooing had spread to British society but was still largely associated with sailors and the lower or even criminal class. Tattooing had however been practiced in an amateur way by public schoolboys from at least the 1840s and by the 1870s had become fashionable among some members of the upper classes, including royalty. In its ‘upmarket’ form, it could be a lengthy, expensive and sometimes painful process. Tattooing spread among the upper classes all over Europe in the nineteenth century, but particularly in Britain where it was estimated in Harmsworth Magazine in 1898 that as many as one in five members of the gentry were tattooed. Taking their lead from the British Court, where King George V followed King Edward VII’s lead in getting tattooed; King Frederick IX of Denmark, the King of Romania, Kaiser Wilhelm II, King Alexander of Yugoslavia and even Tsar Nicholas II of Russia all sported tattoos, many of them elaborate and ornate renditions of the Royal Coat of Arms or the Royal Family Crest. King Alfonso XIII of modern Spain also had a tattoo. The perception that there is a marked class division on the acceptability of the practice has been a popular media theme in Britain, as successive generations of journalists described the practice as newly fashionable and no longer for a marginalised class. Examples of this cliché can be found in every decade since the 1870s.
Despite this evidence, a myth persists that the upper and lower classes find tattooing attractive while the broader middle classes reject it.

As World War I ravaged the globe, it also ravaged the popularity of tattooing, pushing tattoos even farther under the umbrella of delinquency. What credence tattoos had gained as symbols of patriotism and war badges in the eyes of the public was demolished as servicemen moved away from the proud flag motifs and into more sordid depictions. At the beginning of World War II, tattooing once again experienced a boom in popularity as now not only sailors in the Navy, but soldiers in the Army and fliers in the Air Force, were once again tattooing their national pride onto their bodies. During the Second World War, the Nazis, under the order of Adolf Hitler, rounded up those deemed inferior into concentration camps. Once there, if they were chosen to live, they were tattooed with numbers onto their arms. Tattoos and Nazism became intertwined, and the extreme distaste for Nazi Germany and Fascism led to a stronger public outcry against tattooing. This backlash would further worsen with the use of a tattooed man in a 1950s Marlboro advertisement, which strengthened the public’s view that tattoos were no longer for patriotic servicemen, but for criminals and degenerates. The public distaste was so strong by this point that the usual trend of tattoo popularity spiking during times of war was not seen in the Vietnam War. It would take two more decades for tattooing to finally be brought back into society’s good graces. Interestingly, throughout the world’s different military branches, tattoos are either regulated under policies or strictly prohibited to fit dress code rules. In the United Kingdom, as of 2022 the Royal Navy permits most tattoos, unless they are visible in a front-facing passport photo, obscene or offensive, or otherwise deemed inappropriate. The National Museum of the Royal Navy has presented an exhibit about the long history of tattoos among Navy service members, part of the tradition of sailor tattoos. In the United States, the United States Air Force regulates all kinds of body modification. Any tattoos which are deemed to be “prejudicial to good order and discipline”, or “of a nature that may bring discredit upon the Air Force” are prohibited. Specifically, any tattoo which may be construed as “obscene or advocate sexual, racial, ethnic or religious discrimination” is disallowed. Tattoo removal may not be enough to qualify; resultant “excessive scarring” may be disqualifying. Further, Air Force members may not have tattoos on their neck, face, head, tongue, lips or scalp. The United States Army permits soldiers to have tattoos as long as they are not on the neck, hands, or face, with exceptions existing for one ring tattoo on each hand and permanent makeup. Additionally, tattoos that are deemed to be sexist, racist, derogatory, or extremist continue to be banned. The United States Navy has changed its policies and become more lenient on tattoos, allowing neck tattoos as long as they are no larger than one inch. Sailors are also allowed to have as many tattoos of any size as they wish on the arms and legs, as long as they are not deemed to be offensive. I have also found that the Indian Army tattoo policy has been in place since 11 May 2015. The government declared that those who enlist and belong to a tribal community are allowed to have tattoos all over the body. Indians who are not part of a tribal community are only allowed to have tattoos on designated parts of the body such as the forearm, elbow, wrist, the side of the palm, and the back and front of the hands.
Offensive, sexist and racist tattoos are not allowed. I have learned too that tattooing in the federal Indian boarding school system was commonly practiced during the 1960s and 1970s. Such tattoos often took the form of small markings or initials and were often used as a form of resistance; a way to reclaim one’s body. Due to the forced assimilation practices of the Western boarding schools, many indigenous cultural practices were on a severe decline, tattooing being one of them. As a way to retain their cultural heritage some students practiced this ritual and tattooed themselves with found materials like sewing needles and India Ink. Within the schools, the authorities physically labeled the students: “a personal identification number was written in purple ink on their wrists and on the small cupboard in which their few belongings were stored.” Students often had a tendency to tattoo their initials on this very spot; the exact place where the school authorities first marked them. This can be seen as a strong act of resistance where the students were physically rejecting their numerical ID, and reclaiming their own body and identity. Here in the United Kingdom, in 1969 the House of Lords debated a bill to ban the tattooing of minors, on grounds it had become “trendy” with the young in recent years but was associated with crime. It was noted that forty per cent of young criminals had tattoos and that marking the skin in this way tended to encourage self-identification with criminal groups. Since the 1970s, tattoos have become more socially acceptable and fashionable among celebrities. Tattoos are less prominent on figures of authority, and the practice of tattooing by the elderly is still considered remarkable. In recent history, authority figures have adopted the trend more widely; in Australia 65% of people in these professions are tattooed. Tattooing has also steadily increased in popularity since the invention of the electric tattoo machine. In 1936, 1 in 10 Americans had a tattoo of some form. Since the 1970s, tattoos have become a mainstream part of global and Western fashion, common among both sexes, to all economic classes, and to age groups from the later teen years to middle age. Tattoos have experienced a resurgence in popularity in many parts of the world, particularly in Europe, Japan, and North and South America. The growth in tattoo culture has seen an influx of new artists into the industry, many of whom have technical and fine arts training. Coupled with advancements in tattoo pigments and the ongoing refinement of the equipment used for tattooing, this has led to an improvement in the quality of tattoos being produced. Over the past three decades Western tattooing has become a practice that has crossed social boundaries from “low” to “high” class along with reshaping the power dynamics regarding gender. But it has its roots in “exotic” tribal practices of the Native Americans and Japanese, which are still seen in present times.

This week… thoughts.

If electricity comes from electrons, does morality come from morons?
A hangover is the wrath of grapes.


Writing

An official description of writing is “a cognitive and social activity involving neuropsychological and physical processes and the use of writing systems to structure and translate human thoughts into persistent representations of human language”. A system of writing relies on many of the same semantic structures as the language it represents, such as lexicon and syntax, with the added dependency of a system of symbols representing that language’s phonology and morphology. Nevertheless, a written language may take on characteristics distinctive from any available in spoken language. The outcome of this activity, sometimes referred to as ‘text’, is a series of physically inscribed, mechanically transferred or digitally represented linguistic symbols. The interpreter or activator of a text is called a ‘reader’. Writing systems do not themselves constitute languages (with the debatable exception of computer languages); rather, they are a means of rendering language into a form that can be read and reconstructed by other humans separated by time and/or space. Whilst not all languages use a writing system, those that do can complement and extend the capacities of spoken language by creating durable forms of language that can be transmitted across space, for example written correspondence, and stored over time in such places as libraries or other public records. Writing can also have knowledge-transforming effects, since it allows humans to externalise their thinking in forms that are easier to reflect on, elaborate on, reconsider, and revise. Any instance of writing involves a complex interaction amongst available tools, intentions, cultural customs, cognitive routines, genres, tacit and explicit knowledge, and the constraints and limitations of the writing system(s) deployed. Over time, inscriptions have been made with fingers, styluses, quills, ink brushes, pencils, pens and many styles of lithography. The surfaces used for these inscriptions have included stone tablets, clay tablets, bamboo slats, papyrus, wax tablets, vellum, parchment, paper, copperplate, slate, porcelain and other enamelled surfaces. The Incas used knotted cords known as quipu (or khipu) for keeping records. Writing tools and surfaces have been improvised countless times throughout history, as the cases of graffiti, tattooing and impromptu aides-memoire illustrate. In fact I believe tattoos were put on sailors so that they could be identified more easily after their deaths.

An early Remington typewriter.

The typewriter and subsequently various digital word processors have become widespread writing tools, and studies have compared the ways in which writers have framed the experience of writing with such tools as compared with the pen or pencil. Advances in software have steadily improved text editing, although human input is vital to prevent simple errors from creeping in! However, writing technologies from different eras coexist easily in many homes and workplaces. During the course of a day or even a single episode of writing, for example, a writer might instinctively switch between a pencil, a touchscreen, a text editor, a whiteboard, a legal pad, and adhesive notes as different purposes arise. Nowadays so many of us learn to read and write as part of our upbringing and schooling, but it wasn't always so. As human societies emerged, collective motivations for the development of writing were driven by pragmatic exigencies like keeping track of produce and other wealth, recording history, maintaining culture, codifying knowledge through curricula and lists of texts deemed to contain foundational knowledge, and organising and governing societies through the formation of legal systems, census records, contracts, deeds of ownership, taxation, trade agreements, treaties, and so on. Around the fourth millennium BC, the complexity of trade and administration in Mesopotamia outgrew human memory, and writing became a more dependable method for the permanent recording and presentation of transactions. Writing may also have evolved through calendric and political necessities for recording historical and environmental events. Further innovations included more uniform, predictable, and widely dispersed legal systems, the distribution of accessible versions of sacred texts, and the furthering of practices of scientific inquiry and knowledge consolidation, all of which were largely reliant on portable and easily reproducible forms of inscribed language. In addition, the nearly global spread of digital communication systems such as e-mail and social media has made writing an increasingly important feature of daily life, where these systems mix with older technologies like paper, pencils, whiteboards, printers, and copiers. Substantial amounts of everyday writing characterise most workplaces in developed countries, and in many occupations written documentation is not only the main deliverable but also the mode of work itself. Even in occupations not typically associated with writing, routine workflows have most employees writing at least some of the time.

Some professions are particularly associated with writing, such as literary authors, journalists and technical writers, but writing is pervasive in most modern forms of work, civic participation, household management, and leisure activities. Writing runs through business and finance, governance and law, the production and sharing of scientific and scholarly knowledge, journalism, technical and medical writing, literature and the leisure book market, and education itself. Of these, formal education is the social context most strongly associated with the learning of writing, and students may carry these particular associations long after leaving school. Alongside the writing that students read in the form of textbooks, assigned books and other instructional materials, as well as self-selected books, students do much writing within schools at all levels: on subject exams, in essays, in taking notes, in doing homework, and in formative as well as summative assessments. Some of this is explicitly directed toward the learning of writing, but much is focused more on subject learning. Students receive much writing from their teachers as well, in the form of assignments and syllabi, directions for activities, worksheets, corrections on work, or information about subjects or exams. Students also receive institutional notices and regulations, sometimes to be shared with families. Students may also write teacher evaluations for use by teachers to improve instruction or by others reviewing the quality of teacher instruction, particularly within higher education. Writing also pervades schools and educational institutions in less visible and memorable ways. Since schools are typically hierarchically arranged bureaucracies, writing circulates in the form of notices and regulations that teachers receive from their supervisors, and teachers arrange their instruction according to district and state syllabi and regulations. Teachers must often produce and submit lesson plans or other information about their teaching. In primary and secondary education teachers may need to write notices or letters to parents about matters relating to their children's learning, school activities, or regulations. Within school hierarchies many memos, notices, or other documents may flow. National policies and regulations as elaborated by ministries or departments of education may also be of consequence. Additionally, research in the various subject areas and in educational studies may be attended to by educators at both classroom and higher bureaucratic levels.

A sample code in the ‘C’ programming language that displays ‘Hello, World!’ when executed.
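
The original illustration is not reproduced here, so as a minimal sketch of the sort of program the caption describes (the classic textbook example, assuming nothing beyond the standard C library), it might look like this:

#include <stdio.h>

int main(void)
{
    /* Print the greeting, followed by a newline, to standard output */
    printf("Hello, World!\n");
    return 0;
}

Compiled with any standard C compiler (for example with 'gcc hello.c') and then run, the program simply prints the greeting and exits.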

With the onset of computers has come computer programming in different languages, and over the years these have become easier to follow. I myself learned a simple computer language back in the 1980s and used the knowledge quite effectively whilst at work. One language I learned is called BASIC, standing for Beginner's All-purpose Symbolic Instruction Code. Quite clever, I think. Software development is the process of conceiving, specifying, designing, programming, documenting, testing and bug fixing involved in creating and maintaining applications, frameworks or other software components. And bugs do develop, most especially when several people are writing the programs. There is a great quote from Star Trek, where 'Scotty' says "The more they overthink the plumbing, the easier it is to stop up the drain!". It is true, because software development involves writing and maintaining the source code, but in a broader sense it includes all processes from the conception of the desired software through to its final manifestation, typically in a planned and structured process often overlapping with software engineering. Software development also includes research, new development, prototyping, modification, reuse, re-engineering, maintenance, or any other activities that result in software products. But writing itself had to start somewhere, and one way was using an alphabet, which is a set of written symbols that represent consonants and vowels. In a perfectly phonological alphabet, the letters would correspond perfectly to the language's 'phonemes', these being any of the perceptually distinct units of sound in a specified language that distinguish one word from another, for example p, b, d and t in the English words pad, pat, bad and bat. So a writer could predict the spelling of a word given its pronunciation, and a speaker could predict the pronunciation of a word given its spelling. However, as languages often evolve independently of their writing systems, and writing systems have been borrowed for languages they were not designed for, the degree to which letters of an alphabet correspond to phonemes of a language varies greatly from one language to another and even within a single language. Sometimes the term 'alphabet' is restricted to systems with separate letters for consonants and vowels, such as the Latin alphabet, and because of this usage the Greek alphabet is often considered to be the first true alphabet. In most of the alphabets of the Middle East, it is usually only the consonants of a word that are written, although vowels may be indicated by the addition of various diacritical marks. Writing systems based primarily on writing just consonant phonemes date back to the hieroglyphs of ancient Egypt. Such systems are called 'abjads', derived from the Arabic word for alphabet. In most of the alphabets of India and south-east Asia, by contrast, vowels are indicated through diacritics or modification of the shape of the consonant, and these are called 'abugidas'. Some abugidas, such as Ethiopic and Cree, are learned by children as syllabaries, and so are often called 'syllabics'. However, unlike true syllabaries, there is not an independent glyph for each syllable. There are also 'featural' scripts, which represent the features of the phonemes of the language in consistent ways; an example of such a system is Korean hangul. For instance, all labial sounds, ones pronounced with the lips, may have some element in common.
In the Latin alphabet, this is accidentally the case with the letters "b" and "p"; however, labial "m" is completely dissimilar, and the similar-looking "q" and "d" are not labial. In Korean hangul, however, all four labial consonants are based on the same basic element, but in practice Korean is learned by children as an ordinary alphabet, and the featural elements tend to pass unnoticed. Another featural script is SignWriting, the most popular writing system for many sign languages, where the shapes and movements of the hands and face are represented iconically. Such scripts are also common in fictional or invented systems.

The Narmer Palette, with its two 'serpopards' representing the unification of Upper and Lower Egypt, circa 3100 BC.

The origins of writing are older than some may perhaps think, though. For example, a stone slab with 3,000-year-old writing, known as the Cascajal Block, was discovered in the Mexican state of Veracruz and is an example of the oldest script in the Western Hemisphere, preceding the oldest Zapotec writing by approximately 500 years; it is thought to be Olmec, the earliest known major Mesoamerican civilisation. Of the several pre-Columbian scripts that have been found in Mesoamerica, the one that appears to have been best developed, and the only one to have been truly deciphered, is the Maya script. The earliest inscription identified as Maya dates to the third century BC. This writing used logograms complemented by a set of syllabic glyphs, somewhat similar in function to modern Japanese writing. In 2001, archaeologists discovered that there was a civilisation in Central Asia that used writing c. 2000 BC. An excavation near Ashgabat, the capital of Turkmenistan, revealed an inscription on a piece of stone that was used as a stamp seal. Meanwhile the earliest surviving examples of writing in China, inscriptions on so-called 'oracle bones', tortoise plastrons (the bony plate forming the ventral part of the shell of a tortoise or turtle) and ox scapulae, both used for divination, date from around 1200 BC in the late Shang dynasty. A small number of bronze inscriptions from the same period have also survived. In 2003, archaeologists reported discoveries of isolated tortoise-shell carvings dating back to the seventh millennium BC, but whether or not these symbols are related to the characters of the later oracle-bone script is disputed. The Narmer Palette, also known as the Great Hierakonpolis Palette or the Palette of Narmer, is a significant Egyptian archaeological find, dating from about 3100 BC and belonging, at least nominally, to the category of cosmetic palettes. It contains some of the earliest hieroglyphic inscriptions ever found. The earliest known Egyptian hieroglyphs, however, appear on clay labels of a Predynastic ruler called 'Scorpion I', which were recovered at Abydos (modern Umm el-Qa'ab) in 1998. There are also several recent discoveries that may be slightly older, though these glyphs were based on a much older artistic rather than written tradition. The hieroglyphic script was logographic with phonetic adjuncts that included an effective alphabet. The world's oldest deciphered sentence was found on a seal impression in the tomb of Seth-Peribsen at Umm el-Qa'ab, which dates from the Second Dynasty, in the 28th or 27th century BC. There are around 800 hieroglyphs dating back to the Old Kingdom, Middle Kingdom and New Kingdom eras. By the Greco-Roman period, there were apparently more than 5,000. Writing was very important in maintaining the Egyptian empire, and literacy was concentrated among an educated elite of scribes. Only people from certain backgrounds were allowed to train to become scribes, in the service of temple, pharaonic, and military authorities. The hieroglyph system was always difficult to learn, but in later centuries was purposely made even more so, as this apparently preserved the scribes' status. The world's oldest known alphabet appears to have been developed by Canaanite turquoise miners in the Sinai desert around the mid-nineteenth century BC. Around thirty crude inscriptions have been found at a mountainous Egyptian mining site known as Serabit el-Khadem. This site was also home to a temple of Hathor, the 'Mistress of turquoise'.
A later, two-line inscription has also been found at Wadi el-Hol in central Egypt. Based on hieroglyphic prototypes, but also including entirely new symbols, each sign apparently stood for a consonant rather than a word: the basis of an alphabetic system. It seems that it was not until the twelfth to ninth centuries BC, however, that the alphabet took hold and became widely used.

Over the centuries in Iran, three distinct Elamite scripts developed. Proto-Elamite is the oldest known writing system from there. The script was in use only for a brief time (c. 3200–2900 BC); clay tablets with Proto-Elamite writing have been found at different sites across Iran, the majority having been excavated at Susa, an ancient city located east of the Tigris between the Karkheh and Dez rivers. This Proto-Elamite script consists of more than 1,000 signs and is thought to be partly logographic. Meanwhile in Europe, notational signs made in caves some 37,000 years ago apparently convey calendric meaning about the behaviour of the animal species drawn next to them, and are considered the first known proto-writing in history. Hieroglyphs are found on artefacts of Crete from the early to mid-second millennium BC, and whilst the writing system of the Mycenaean Greeks has been deciphered, others have yet to be deciphered. In the Indus Valley there is the Indus script, which refers to short strings of symbols associated with the civilisation that spanned modern-day Pakistan and north India and which were in use between 2600 and 1900 BC. In spite of many attempts at decipherment and many claims, it is as yet undeciphered. The term 'Indus script' is mainly applied to that used in the mature Harappan phase, which perhaps evolved from a few signs found in early Harappa after 3500 BC. So, whilst research into the development of writing during the late Stone Age is ongoing, the current consensus is that it first evolved from economic necessity in the ancient Near East. Writing most likely began as a consequence of political expansion in ancient cultures, which needed reliable means for transmitting information, maintaining financial accounts, keeping historical records, and similar activities. As already mentioned, around the fourth millennium BC the complexity of trade and administration outgrew the power of memory, and writing became a more dependable method of recording and presenting transactions in a permanent form, so the invention of the first writing systems is roughly contemporary with the beginning of the Bronze Age in the late fourth millennium BC. It is generally agreed that Sumerian writing was an independent invention; however, it is debated whether Egyptian writing was developed completely independently of Sumerian or was a case of cultural diffusion.

Globular envelope with a cluster of accountancy tokens, Uruk period, from Susa. These are presently in the Louvre Museum.

In approximately 8000 BC, the Mesopotamians began using clay tokens to count their agricultural and manufactured goods. Later they began placing these tokens inside large, hollow clay containers (bullae, or globular envelopes) which were then sealed. The quantity of tokens in each container came to be expressed by impressing, on the container's surface, one picture for each instance of the token inside. They next dispensed with the tokens, relying solely on symbols for the tokens, drawn on clay surfaces. To avoid making a picture for each instance of the same object (for example: 100 pictures of a hat to represent 100 hats), they 'counted' the objects by using various small marks. In this way the Sumerians added "a system for enumerating objects to their incipient system of symbols". The original Mesopotamian writing system was derived around 3200 BC from this method of keeping accounts, and by the end of the fourth millennium BC the Mesopotamians were using a triangular-shaped stylus pressed into soft clay to record numbers. This system was gradually augmented by using a sharp stylus to indicate what was being counted by means of pictographs. Round-stylus and sharp-stylus writing was gradually replaced by writing using a wedge-shaped stylus, at first only for logograms but later also for phonetic elements. Around 2700 BC, cuneiform began to represent syllables of spoken Sumerian. About that time, Mesopotamian cuneiform became a general-purpose writing system for logograms, syllables, and numbers. This script was adapted to another Mesopotamian language, the East Semitic Akkadian, around 2600 BC, and then to others. With the adoption of Aramaic as the 'lingua franca' of the Neo-Assyrian Empire (911–609 BC), Old Aramaic was also adapted to Mesopotamian cuneiform. The Phoenician writing system was adapted from the Proto-Canaanite script sometime before the fourteenth century BC, which in turn borrowed principles of representing phonetic information from Egyptian hieroglyphs. This writing system was an odd sort of syllabary in which only consonants are represented. The script was later taken up by the Greeks, who adapted certain consonantal signs to represent their vowels. The Cumae alphabet, a variant of the early Greek alphabet, gave rise to the Etruscan alphabet and its own descendants, such as the Latin alphabet and runes. Other descendants from the Greek alphabet include Cyrillic, used to write Bulgarian, Russian and Serbian, amongst others. The Phoenician system was also adapted into the Aramaic script, from which both the Hebrew and Arabic scripts are descended.

But, in the history of writing, religious texts have played a special role. For example, some compilations of religious texts were among the earliest popular texts, or even the only written texts in some languages, and in some cases they are still highly popular around the world. The first books printed widely using the printing press were Bibles. Such texts enabled the rapid spread and maintenance of societal cohesion, collective identity, motivations, justifications and beliefs. Nowadays there are numerous programmes in place to aid both children and adults in improving their literacy skills. These resources, and many more, span different age groups in order to offer each individual a better understanding of their language and of how to express themselves in writing, and perhaps thereby to improve their socio-economic status. One quote I like is "Did you ever notice that, when people become serious about communication, they want it in writing?"

This week… A few Yorkshire medical terms.

Artery – The study of paintings
Bacteria – Back door to cafeteria
Cauterise – Made eye contact with her

Click: Return to top of page or Index page

Southwell Minster

Some years ago, whilst still living in Peterborough, I was privileged to join a small choir which, as well as performing concerts locally, also travelled to a few different cathedrals where we "stood in" for the resident choirs. So I got to sing in some really lovely places with a great group of people. We were small in number, at most about a dozen, and it was hard work at times, but we all enjoyed it. One such place was Southwell Minster, which is both a minster and a cathedral in Southwell, Nottinghamshire. It is situated six miles (9.7 km) from Newark-on-Trent and thirteen miles (21 km) from Mansfield. It is the seat of the Bishop of Southwell and Nottingham and the cathedral of the Diocese of Southwell and Nottingham. It is a Grade I listed building. Much of the main fabric is 'Romanesque', or Norman in traditional English terminology, but the Minster is most famous for the Gothic chapter house which was begun in 1288, with carved capitals representing different species of plants. To clarify, 'minster' is an honorific title given to particular churches in England, most notably York Minster in Yorkshire, Westminster Abbey in London and Southwell Minster here in Nottinghamshire. The term 'minster' is first found in royal foundation charters of the seventh century, when it designated any settlement of clergy living a communal life and endowed by charter with the obligation of maintaining the daily office of prayer. Widespread in tenth-century England, minsters declined in importance with the systematic introduction of parishes and parish churches from the eleventh century onwards. The term continued as a title of dignity in later medieval England, for instances where a cathedral, monastery, collegiate church or parish church had originated with an Anglo-Saxon foundation. Eventually a minster came to refer more generally to "any large or important church, especially a collegiate or cathedral church". In the twenty-first century, the Church of England has designated additional minsters by bestowing the status on certain parish churches, the most recent elevation to minster status being St Mary Magdalene church in Taunton, Somerset on 13 March 2022, bringing the total number of current Church of England minsters to thirty-one. As for Southwell, the earliest church on the site is believed to have been founded in 627 AD by Paulinus, the first Archbishop of York, when he visited the area whilst baptising believers in the River Trent. The legend is actually commemorated in the Minster's baptistry window. In 956 AD King Eadwig gave land in Southwell to Oskytel, Archbishop of York, on which a minster church was established. The Domesday Book of 1086 recorded the Southwell manor in great detail. The Norman reconstruction of the church began in 1108, probably as a rebuilding of the Anglo-Saxon church, starting at the east end so that the high altar could be used as soon as possible, whilst the Saxon building was dismantled as work progressed. Many stones from this earlier Anglo-Saxon church were reused in the construction. The tessellated floor and late eleventh-century tympanum in the north transept are the only parts of the Anglo-Saxon building remaining intact. Work on the nave began after 1120 and the church was completed by c.1150. The church was originally attached to the Archbishop of York's Palace, which stood next door but is now ruined. It served the archbishop as a place of worship and was a collegiate body of theological learning, hence its designation as a minster.
The minster draws its choir from the nearby school with which it is associated. In my research I found a comment saying that the Norman chancel was square-ended, but I can find no further reference to this elsewhere. The chancel was replaced with another in the Early English style in 1234–51 because it was too small. The octagonal chapter house, begun in 1288, with a vault in the Decorated Gothic style, has naturalistic carvings of foliage. The elaborately carved 'pulpitum' or choir screen was built in 1320–40. The church suffered less than many others during the English Reformation as it was re-founded in 1543 by Act of Parliament. Southwell is where King Charles I surrendered to Scottish Presbyterian troops in 1646 during the English Civil War, after the third siege of Newark. The fighting saw the church seriously damaged, and the nave is said to have been used as stabling. The adjoining palace was almost completely destroyed, first by Scottish troops and then by the local people, with only the Hall of the Archbishop remaining as a ruined shell. Then on 5 November 1711 the southwest spire was struck by lightning, and the resulting fire spread to the nave, crossing and tower, destroying roofs, bells, clock and the organ. By 1720 repairs had been completed, giving a flat panelled ceiling to the nave and transepts. In 1805, Archdeacon Kaye gave the Minster the Newstead lectern, which was once owned by Newstead Abbey. It had been thrown into the abbey fishpond by the monks to save it during the Dissolution of the Monasteries, then later discovered when the lake was dredged! Then in 1818, Henry Gally Knight gave the Minster four panels of sixteenth-century Flemish glass (which now fill the bottom part of the East window) which he had acquired from a Parisian pawnshop. In 1805 the spires had been found to be in danger of collapse and were taken down; they were re-erected in 1879–81 when the minster was extensively restored by Ewan Christian, an architect specialising in churches. The nave roof was replaced with a pitched roof and the quire was redesigned and refitted.

Southwell rood screen (pulpitum) from the choir.

The whole place has quite an ecclesiastical history, and Southwell Minster was served by prebendaries from the early days of its foundation. By 1291 there were sixteen Prebends of Southwell mentioned in the Taxation Roll. In August 1540, as the dissolution of the monasteries was coming to an end, and despite its collegiate rather than monastic status, Southwell Minster was suppressed specifically in order that it could be included in the plans initiated by King Henry VIII to create several new cathedrals. It appears to have been proposed as the see for a new diocese comprising both Nottinghamshire and Derbyshire, as a replacement for Welbeck Abbey, which had been dissolved in 1538 and which by 1540 was no longer owned by the Crown. The plan for the minster's elevation did not proceed, so in 1543 Parliament reconstituted its collegiate status as before. In 1548 it again lost its collegiate status under the 1547 Act of King Edward VI, which suppressed, amongst others, almost all collegiate churches. At Southwell, the prebendaries were given pensions and the estates sold, whilst the church continued as the parish church on the petitions of the parishioners. By an Act of Philip and Mary in 1557, the minster and its prebends were restored, and in 1579 a set of statutes was promulgated by Queen Elizabeth I. The chapter operated under this constitution until it was dissolved in 1841. The Ecclesiastical Commissioners made provision for the abolition of the chapter as a whole, such that the death of each canon after this time resulted in the extinction of his prebend. The chapter came to its appointed end on 12 February 1873 with the death of Thomas Henry Shepherd, rector of Clayworth and prebendary of Beckingham. Although the plans of August 1540 to make Southwell Minster a cathedral did not initially come to fruition, 344 years later, in 1884, it became a cathedral proper for Nottinghamshire and a part of Derbyshire including the city of Derby. In 1927 the diocese was divided and the Diocese of Derby was formed.

Compartments of the nave, interior and exterior.

Architecturally, the nave, transepts, central tower and two western towers of the Norman church which replaced the Anglo-Saxon minster remain as an outstanding achievement of severe Romanesque design. With the exception of fragments mentioned above, they are the oldest part of the existing church.

The nave is of seven bays, plus a separated western bay. The columns of the arcade are short and circular, with small scalloped capitals. The triforium has a single large arch in each bay and the clerestory has small round-headed windows, whilst the external window openings are circular. There is a tunnel-vaulted passage between the inside and outside window openings of the clerestory. The nave aisles are vaulted, and the main roof of the nave is a trussed rafter roof with tie-beams between each bay, these being a late nineteenth-century replacement. By contrast with the nave arcade, the arches of the crossing are tall, rising to nearly the full height of the nave walls. The capitals of the east crossing piers depict scenes from the life of Jesus. Two stages of the inside of the central tower can be seen at the crossing, with cable and wave decoration on the lower order and zigzag on the upper. The transepts have three storeys with semi-circular arches, like the nave, but without aisles.

Rib vault of Southwell Minster choir.

The western facade has pyramidal spires on its towers – a unique feature today, though common in the twelfth century. The existing spires date only from 1880, but they replace those destroyed by fire in 1711, which are documented in old illustrations. The large west window dates from the fifteenth century. The central tower's two ornamental stages place it high among England's surviving Norman towers; whilst the lower order has intersecting arches, the upper order has plain arches. The north porch has a tunnel vault, and is decorated with intersecting arches. The choir is Early English in style and was completed in 1241; it has transepts, thus separating the choir into a western and an eastern arm. The choir is of two storeys, with no gallery or triforium. The lower storey has clustered columns with multiform pointed arches, whilst the upper storey has twin lancet arches in each bay. The rib vault of the choir springs from clustered shafts which rest on corbels. The vault has ridge ribs. The square east end of the choir has two storeys, each of four lancet windows.

Entrance portal of the chapter house with the famous carved foliage.

In the late thirteenth and fourteenth centuries the chapter house and the choir screen were added. The chapter house, started in 1288, is in an early Decorated style, octagonal, with no central pier. It is reached from the choir by a passage and vestibule, through an entrance portal. This portal has five orders, and is divided by a central shaft into two subsidiary arches with a circle with a quatrefoil above. Inside the chapter house, the stalls fill the octagonal wall sections, each separated by a single shaft with a triangular canopy above. The windows are of three lights, with two circles with trefoils above them and a single circle with a quatrefoil above that. This straightforward description gives no indication of the glorious impression, noted by so many writers, of the elegant proportions of the space, and of the profusion (in vestibule and passage, not just in the chapter house) of exquisitely carved capitals and tympana, mostly representing leaves in a highly naturalistic and detailed manner. The capitals in particular are deeply undercut, adding to the feeling of realism. Individual plant species such as ivy, maple, oak, hop and hawthorn can often be identified. The rood screen dates from 1320 to 1340, and is an outstanding example of the Decorated style. It has an east and a west facade, separated by a vaulted space with flying ribs. The east facade, of two storeys, is particularly richly decorated, with niches on the lower storey with ogee arches, and openwork gables on the upper storey. The central archway rises higher than the lower storey, with an ogee arch surmounted by a cusped gable. The finest memorial in the minster is the alabaster tomb of Edwin Sandys, Archbishop of York, who died in 1588. As for choirs, the Cathedral Choir comprises the boy choristers, girl choristers and lay clerks who, between them, provide music for seven choral services each week during school terms. The boy and girl choristers usually sing as separate groups, combining for particularly important occasions such as Christmas and Easter services, and notable events in the life of the minster. Services have been sung in Southwell Minster for centuries, and the tradition of daily choral worship continues to thrive. There was originally a college of vicars choral who took the lead as singers, one or two of whom were known as 'rector chori', or 'ruler of the choir'. The vicars choral lived in accommodation where Vicars Court now stands, following a collegiate lifestyle. The current Cathedral Choir owes its form to the addition of boy choristers to the vicars choral, with the vicars themselves eventually being replaced by lay singers, known as lay clerks. For a long period of time the format remained very similar, with a number of boy choristers singing with a mixture of lay clerks and vicars choral, slowly becoming a group of entirely lay singers. Eventually, in 2005, a girls' choir was started by the Assistant Director of Music; its members have now been formally admitted as girl choristers. All of the choristers are educated at the Minster School, a Church of England academy with a music-specialist Junior Department (years 3–6) for choristers and other talented young musicians. The Cathedral Choir has an enviable reputation for excellence, and has recorded and broadcast extensively over the years.
Regular concerts and international tours are a feature of the choir’s work, alongside more local events such as civic services and the annual Four Choirs’ Evensong together with the cathedral choirs of Derby, Leicester and Coventry.

The Cathedral Choir can be heard singing at evensongs at 5.30 pm every weekday (except Wednesday), and on Saturdays and Sundays. In addition, there is 'The Minster Chorale', which is Southwell Minster's auditioned adult voluntary choir, directed by the Minster's Assistant Director of Music, Jonathan Allsopp. Founded in 1994, the Chorale's purpose is to sing regularly for services, especially at times when the Cathedral Choir is not available, and it sings for a mixture of services throughout the year. In addition to its regular round of services, one of the highlights of the Chorale's year is its annual performance of Handel's Messiah in the run-up to Christmas; this concert is a staple of the Minster's Christmas programme, so is always packed out. The Chorale also regularly goes on tour; in recent years it has toured to the Channel Islands and the Scilly Isles. A 2020 tour to Schwerin, Germany, was planned (together with Lincoln Cathedral Consort), but this was cancelled due to the coronavirus pandemic. The Chorale also visits other cathedrals to sing services, and recently has been to York Minster. Southwell Minster Chorale rehearses weekly during term-time on a Friday from 7:45 pm to 9:15 pm. The Chorale also enjoys a good social life, with regular trips to the pub after rehearsals and for Sunday lunches. The minster is also home to the annual Southwell Music Festival, held in late August. However, as previously mentioned, many years ago I was able to sing the services as part of a visiting choir. We were small in number, so I had to sing up a bit – but we managed!


To end with, I have found a couple of old illustrations.

Southwell Minster before the original spires were destroyed by fire in 1711.
Without the spires, which were removed in 1805 and replaced in 1879-81.

This week… Dictators.

“Speaking openly about dictators is like stepping on the tail of a snake. Do so and it will turn and bite you. To kill it, you must chop off its head.” ~ Author unknown.

Click: Return to top of page or Index page

Attleborough, Norfolk

This week I write a little about Attleborough in Norfolk, where one of my older brothers lived with his wife and family for some years. It is also where my parents retired to; they have both since passed away, and their ashes now lie side by side in the local churchyard. Attleborough is a market town and civil parish located on the A11 between Norwich and Thetford in Norfolk. The parish is in the district of Breckland and has an area of 8.5 square miles (21.9 square kilometres). The 2001 Census recorded the town as having a population of 9,702, distributed between 4,185 households, increasing to a population of 10,482 in 4,481 households in the 2011 Census. Attleborough is in the Mid-Norfolk constituency of the UK Parliament, represented since the 2010 general election by a Conservative MP. Attleborough railway station provides a main line rail service to both Norwich and Cambridge. As to the town's history, the Anglo-Saxon foundation of the settlement is unrecorded. A popular theory of the town's origin makes it a foundation of an 'Atlinge', and certainly 'burgh' indicates that it was fortified at an early date. According to the mid-twelfth-century 'hagiographer' of Saint Edmund, Geoffrey of Wells, Athla was the founder of the ancient and royal town of Attleborough in Norfolk. Incidentally, this taught me a new word, 'hagiography', which it seems comes from the Ancient Greek 'hagios' ('holy') and 'graphia' ('writing'), and means a biography of a saint or an ecclesiastical leader, as well as, by extension, an adulatory and idealised biography of a preacher, priest, founder, saint, monk, nun or icon in any of the world's religions. Early Christian hagiographies might consist of a biography or 'vita' (from the Latin 'vita', meaning 'life', which begins the title of most medieval biographies), a description of the saint's deeds or miracles, an account of the saint's martyrdom (called a 'passio'), or a combination of these. Christian hagiographies focus on the lives, and notably the miracles, ascribed to men and women canonised by the Roman Catholic church, the Eastern Orthodox Church, the Oriental Orthodox churches and the Church of the East. Other religious traditions such as Buddhism, Hinduism, Taoism, Islam, Sikhism and Jainism also create and maintain hagiographical texts concerning saints, gurus and other individuals believed to be imbued with sacred power. Hagiographic works, especially those of the Middle Ages, can incorporate a record of institutional and local history, and evidence of popular cults, customs, and traditions.

The village sign in Attleborough.

In the Domesday Book of 1086, the town is referred to as 'Attleburc'. Earlier, when the Danes swept across Norfolk and seized Thetford, it is believed that the Saxons rallied their forces at Attleborough and probably threw up some form of protection. Although the Saxons put up a vigorous resistance, they eventually capitulated to the Danes, and during the time of King Edward the Confessor powerful Danish families like Toradre and Turkill controlled local manors. If local records are correct, the Danes brought nothing but disaster to Attleborough, and it took the coming of King William the Conqueror to restore some sense of well-being to the area. Turkill relinquished his hold on the area to the Mortimer family towards the end of King William's reign, and they governed Attleborough for more than three centuries. In the fourteenth century the Mortimer family founded the Chapel of the Holy Cross (being the south transept of Attleborough Church); about a century later Sir Robert de Mortimer founded the College of the Holy Cross, and the nave and aisles were added to accommodate the congregation. Then, following King Henry VIII's Dissolution of the Monasteries, the building was virtually destroyed by Robert Radcliffe, Lord Fitz Walter, Earl of Sussex, and material from the building was used for making up the road between Attleborough and Old Buckenham. However, this left Attleborough Church with a tower at the east end. A great part of the town was destroyed by fire in 1559, and it was during that period that the Griffin Hotel was built. It was in the cellars of the Griffin Hotel that prisoners on their way to the March Assizes in Thetford were confined overnight, tethered by chains to rings in the wall. The arrival of the prisoners aroused a great deal of public interest, and eventually traders set up a fair whenever they came. This became known as the 'Attleborough Rogues Fair' and was held on the market place on the last Thursday in March. Also on the market place, festivities took place on Midsummer Day when the annual guild was held. It appears that the town has had the right to hold a weekly Thursday market since 1285. A weekly market is still held and has recently (in 2004) returned to Queen's Square, where it is presumed the market was originally held. The first toll or turnpike road in England is reputed to have been created here at the end of the seventeenth century, as Acts of Parliament were passed in 1696 and 1709, "For the repairing of the highway between Wymondham and Attleborough, in the County of Norfolk, and for including therein the road from Wymondham to Hethersett". The first national census of 1801 listed the population of Attleborough as 1,333. By 1845 Attleborough certainly dominated the surrounding parishes with a population of nearly 2,000, and in that year the Norwich to Brandon railway arrived. The town supported six hostelries, these being The Griffin (the oldest), the Angel, the Bear, the Cock, the Crown and the White Horse. The Griffin, the Bear and the Cock still operate, but the Crown is now a youth centre and the Angel is a building society branch office. Nothing is known of the fate of the White Horse after 1904, although the White Horse building still exists as a private house. There are currently two more public houses, these being The London Tavern and the Mulberry Tree, which is also a restaurant. At the centre of the town is Queen's Square, at one time referred to as Market Hill.
In 1863 a corn exchange owned by a company of local farmers was built in the High Street, and in 1896 Gaymer's cider-making plant was built on the south side of the railway and soon became established as the largest employer in the town. The factory has now closed for cider-making, which has moved to Shepton Mallet in Somerset, but the site has since re-opened as a chicken processing plant, and the corn exchange is now a local Indian restaurant. The First World War affected Attleborough probably no more or less than many similar small towns. Five hundred and fifty men joined the armed forces and ninety-six did not return. The 1920s saw continuing growth of the market, held on a Thursday, with the stalls spread along the pavements of Church Street and in an open area by the Angel Hotel opposite the Griffin Inn. It was the turkey sales which made the town a thriving market centre in the 1930s, and thousands were sold each year on Michaelmas Day. Local employment still largely revolved around Gaymer's cider works. Well into the 1930s lighting was by oil lamps; then came the building of the gas works in Queens Road (since demolished, although the Gas Keeper's House is still there). Gradually gas was piped into homes, but it was a slow process. In the early 1930s the Corn Hall was sold and became a cinema, reaching its heyday in the early 1940s. During 1939 the old post office was sold and became the Doric Restaurant in Queen's Square; the building is now the town hall. The new post office was built in Exchange Street. There were two local airfields during World War II, one at Deopham Green (Station 142) and one at Old Buckenham (Station 144). Structurally the town changed little during the 1950s and there were no great leaps in population growth, although there was the notable arrival of the notorious London gangsters, the Kray twins, who took over a local hostelry. The 1960s were different: the overspill programme and new town development brought new families into south Norfolk. Attleborough had to make decisions for the future and new development zones were designated. The first estate programme began with the building of the council-owned Cyprus Estate, which has since been complemented by other private housing schemes such as Fairfields and Ollands, built mainly in the 1970s, and a large estate on the south side of the town in the 1990s. The traditional traffic route along the A11 trunk road became a bottleneck as it ran both ways along High Street and Church Street, so in the 1970s a one-way system was opened and this channelled traffic around the natural ring road surrounding the church. The volume of traffic continued to increase, making that change obsolete, so the Attleborough bypass was opened in 1984. The bypass was widened and completed in 2007, removing the only single-lane section of the A11 between Thetford and Norwich.

Attleborough parish church.

The parish church of the Assumption of the Blessed Virgin Mary is partly Norman and partly fourteenth century. The east end of the church is Norman and the nave is late fourteenth century. In 1368 the College of the Holy Cross was founded in the Norman part, and at that time the nave was built for the use of the parish. The remarkable rood screen has the loft intact for its full width but has often been restored. It is one of the finest rood screens in Norfolk, and above it are frescoes of c. 1500, since much mutilated. Meanwhile the Eastern Baptist Association meets at the Baptist church in Leys Lane, and the Methodist church in London Road has stopped holding normal services but still provides rooms to hire for community use. This building was designed by the architect Augustus Scott in Gothic Revival style, and in 1913 replaced the Primitive Methodist building in Chapel Road, since demolished. As for educational facilities in the town, there are three schools: Attleborough Academy on Norwich Road, Rosecroft Primary School on London Road and Attleborough Primary School on Besthorpe Road. Wymondham College, a large state boarding school, is located just outside the town. Industrially, Banham Poultry is based on Station Road and has an annual turnover of £100 million. Its chairman was Michael Foulger, who is also deputy chairman of Norwich City Football Club. There have been a few notable residents of the town, namely the composer Malcolm Arnold, who lived in the town from the late 1980s until his death in September 2006; the professional footballers John Fashanu and his brother Justin Fashanu, who lived and went to school in Attleborough; the racing driver Ayrton Senna (1960–1994), who lived in Attleborough during his early years in international motorsport through to his time in Formula One; and Brandon Francis (born Justin Christopher Davis), an actor and writer who also lived and went to school in Attleborough.

This week… a quote.

“Some people drink deeply from the fountain of knowledge. Others just gargle.”

~ Grant M. Bright

Click: Return to top of page or Index page

Some Family History

Although not too many may know it, I am from London, as was my father and his father before him. My paternal grandfather was in the light infantry during World War I and was captured; he was also injured, so at the end of the war his return to England was delayed. As a result he survived when others did not. My father was born in 1919 in London and he sadly passed away this day in 1989. I understand that he first got a job at W.H. Smith's in London; my estimate would be around 1935. He was in the army during World War II but was injured during his training, so he stayed in England and was involved with munitions; that much I knew, though he never talked about his work apart from the odd funny story. He did say that he was taught to drive and that he went to (I think) Stranraer for a short while, and from what little he did say about that place, I don't think he enjoyed his visit. After the war, on leaving the army he would have returned to W.H. Smith's in London. In the meantime my mother had been born in 1921, also in London, but my research shows her family, including her older brothers, came from Cornwall, probably working in the tin mines; they then moved for a while to South Wales, then Suffolk and finally London. My mother was determined to work in an office rather than in a local factory, so she got herself a job in W.H. Smith's, which is where my parents met. Knowing more than a bit about munitions, my father told my mother about the different bombs, but sadly she was badly injured when a delayed-action bomb exploded near to where she and her mother were. My mother managed to protect her Mum, but the explosion damaged my mother's back and she was initially told she would never walk again. At that time my parents weren't yet married, so my mother told my father to leave her as "he wouldn't want to be married to a cripple". My father then said "I'm not going to marry a cripple, I'm going to marry 'you'". At which my mother said "In that case, I will walk down the aisle". Which she did. Quite a determined character, my mother. She bore three children, all boys. My two elder brothers were born in 1942 and 1944, then I arrived a while later in 1953. Mum told me later she had hoped for a girl, but when I turned up they decided that was enough!

3 Station Road, now part of St. Jude’s church.

But it seems that my dad had wanted to be a teacher, and thinking about it now it must have been quite a challenge for both Mum and Dad to be bringing up young children during and after the war, but they managed. Then in 1952 they learned that I was on the way, but the Great Smog of that year gave my mother health problems and later she had a stroke. This meant that I was born prematurely. So, knowing they had to get out of London, my Dad applied for a few teaching jobs outside the city and one came up in Whittlesey, near Peterborough. This one came with use of the School House, so when I was less than a year old, my parents, my two brothers and I all moved. But it nearly went wrong, as Dad, having travelled up to Peterborough by train for a job interview, then headed over to Whittlesey and almost mistook Kings Dyke for Whittlesey, which he did not like the look of! It is quite fortunate that the bus conductress corrected him and kept him on the bus. Soon afterwards my Dad's parents were approaching retirement age and I am told that, much to Dad's surprise, they moved to Whittlesey! There they became housekeepers for the local vicar, whose eyesight was failing; I never did know how they managed that. But with his parents now in Whittlesey, it did mean that my Mum and Dad felt they had to stay there. I have an idea that Dad had at first seen Whittlesey as a 'stepping stone' to other places, but it did not happen. Back then, the town had several small schools, like Station Road, Broad Street, West End and Queen Street. Then the junior school was built, this being the Alderman Jacobs Primary School in Drybread Road, Whittlesey, which opened its school gates in September 1960, when the Beatles were just starting out and John F. Kennedy was being elected to the White House. So I went to a few of the infant schools, then went to the Alderman Jacobs school in 1961. My two elder brothers were taught by my Dad, but I never was as I was in a different 'stream' at that school. I did not pass my eleven-plus, so went to the secondary modern school nearby, whilst those who passed went to the grammar school in the nearby town of March. At school I learned much; some things I enjoyed more than others, and I got on with some teachers much better than others too. I sang, I learned to play the trumpet, I played chess – I even became a 'chess monitor' for a while! I stayed on for an extra year at school, leaving at sixteen. As the years passed a few smaller schools closed as one new one opened, Park Lane, and then the Sir Harry Smith school became a Community College. In 1980 New Road Primary School was opened as a co-educational day school for children aged 4 to 11 years, and in 2014 the school became an academy in partnership with Park Lane Primary School. Both schools joined with Sir Harry Smith Community College to form the Aspire Learning Trust in July 2016.

Broad Street school, now a hair salon.

In our early years at the School House in Whittlesey we were living just across the road from the local St. Mary's church, so we became involved there. To begin with, the sound of the loud church bells scared me as I did not know what they were. But my father sat me down, quietly and calmly explained the sound, and also told me that he himself was a bellringer, and I was fine with that. I think it is almost a fear of the unknown that can scare us. When in church, my father and one of my elder brothers sang in the choir, so I wanted to do the same. It seems I was a bit young to join, and as my father was choirmaster and did not want to show favouritism, he asked the other members of the choir their opinion. They all said "Give the boy a voice test and the reading test – if he passes, he's in!". I did so and I was in. At the time of course I knew nothing of this, but it makes me smile to think of how it must have been. So I began singing, and as I grew older many of my fellow choristers stopped as their voices 'broke'. Interestingly my voice didn't; it simply 'slid', as I found I could no longer reach the high notes. So I went from treble to alto to tenor and finally bass! Meanwhile at school a few folk were beginning to play musical instruments, but because of my physical disability, namely with my right hand, I could not manage things like a recorder. I managed a harmonica, but then at secondary school found I could play a cornet. Admittedly I played it 'incorrectly', by pressing the valves left-handed, but it worked. This enabled me to join a band, Nassington Brass, which I did for a while, playing a trumpet, but then I was all but 'ordered' to give up singing and play only in the band. But I found I was enjoying singing in a choir far more and I was better at that, so I gave up band work. In the next few years I joined a couple of other choirs, including a male voice choir, but I was also picked up, after an impromptu voice test by Henk Kamminga, for the quite small choir of which he was master. That really was hard work, but I now sang in various cathedrals around England, where we led the Saturday and Sunday services. Hard work, but rewarding.

Park Inn by Radisson. Formerly Peterborough Telephone Manager’s Office.

I was still working for British Telecom in the Sales department in Peterborough, but as I have said previously my face did not 'fit' there, so to gain promotion I got a transfer to Leicester. It meant travelling by train for a few months, and during that time I met a lovely lady and we were married for a while. Further transfers and relocations within BT brought me to office work in Nottingham, up to Sheffield and down to Birmingham, but eventually I found the travelling simply too much, so I agreed a transfer back up to Sheffield. Each move was good, as it expanded my knowledge, and the move up to Sheffield gave me the opportunity to go on a Trainer's course, which I enjoyed. The only downside to that came a year later, when the department was relocated to Manchester; the people who were still in Sheffield were given alternative work, which I do not think they enjoyed as much, and they perhaps saw me as being part of the unwanted change, even though I had no choice in the decision. Also, the relocation meant I got to help train the new folk on their work in Manchester, so I was not appreciated in Sheffield! I got a transfer to Leicester, where I found I was back working with people I knew from years ago. But time passed, and after 38 years with BT I was shown the door and I left, along with a few others. After a while I then started my own business, teaching people, mainly my age and above, how to take basic photographs as well as how to share them on the internet with family and friends. But my health deteriorated still further, and I also caught Covid-19; I am now residing in a lovely care home in Leicester, which suits me nicely. Both my parents have passed away, and although I've lost touch with one brother I still have intermittent contact with the other. I have regular contact with good friends and I am content with the research and writing I now do.

This week… a quote I like.

“Do the right thing, because it’s the right thing to do.”

Click: Return to top of page or Index page

Ely

Ely (pronounced Ee-lee) is a cathedral city in the East Cambridgeshire district of Cambridgeshire, England, about fourteen miles (23 km) north-northeast of Cambridge and eighty miles (130 km) from London. Ely is built on a twenty-three square-mile (60 km²) Kimmeridge Clay island which, at eighty-five feet (26 m), is the highest land in the Fens. It was due to this topography that Ely was not waterlogged like the surrounding Fenland, and was an island separated from the mainland. Major rivers, including the Witham, Welland, Nene and Great Ouse, feed into the Fens and, until draining commenced in the seventeenth century, formed freshwater marshes and meres within which peat was laid down. Once the Fens were drained, this peat created a rich and fertile soil ideal for farming. The Great Ouse river was a significant means of transport until the Fens were drained in the seventeenth century and Ely ceased to be an island. The river is now a popular boating spot, and has a large marina. Although now surrounded by land, the city is still recognised and remembered as "The Isle of Ely". There are two Sites of Special Scientific Interest in the city, these being a former Kimmeridge Clay quarry, and one of the United Kingdom's best remaining examples of medieval 'ridge and furrow' agriculture. The economy of the region is mainly agricultural, and before the Fens were drained eel fishing was an important activity, from which the settlement's name may have been derived. Other important activities included wildfowling, peat extraction, and the harvesting of both osier (willow) and sedge (rush). The city was the centre of local pottery production for more than 700 years, including pottery known as Babylon ware. A Roman road, Akeman Street, passes through the city; its southern end meets Ermine Street near Wimpole and its northern end is at Brancaster. Little direct evidence of Roman occupation in Ely exists, although there are nearby Roman settlements such as those at Little Thetford and Stretham. A coach route, known to have existed in 1753 between Ely and Cambridge, was improved in 1769 as a toll road or turnpike. The present-day A10 closely follows this route, whilst Ely railway station, built in 1845, is on the Fen Line and is now a railway hub, with lines north to King's Lynn, north-west to Peterborough, east to Norwich, south-east to Ipswich and south to Cambridge then London. King Henry II granted the first annual fair, Saint Etheldreda's (or Saint Audrey's) seven-day event, to the abbot and convent on 10 October 1189. The word "tawdry" originates from the cheap lace sold at this fair, and a weekly market has taken place in Ely market square since at least the thirteenth century. Markets are now held on Thursdays, Saturdays and Sundays, with a farmers' market on the second and fourth Saturdays of each month. Present-day annual events include the Eel Festival in May, established in 2004, and a fireworks display in Ely Park, first staged in 1974. The city of Ely has been twinned with Denmark's oldest town, Ribe, since 1956, and Ely City Football Club was formed in 1885. The nearby Roswell Pits are a palaeontologically significant Site of Special Scientific Interest, just one mile (1.6 km) north-east of the city. The Jurassic Kimmeridge Clays were quarried in the nineteenth and twentieth centuries for the production of pottery and for maintenance of river embankments. Many specimens of extinct marine life have been found during quarrying.
There is scattered evidence of Late Mesolithic to Bronze Age activity in Ely, such as Neolithic flint tools and a Bronze Age axe and spearhead. There is also some slightly denser Iron Age and Roman activity, with evidence of at least seasonal occupation. For example, a possible farmstead of the late Iron Age to early Roman period was discovered at West Fen Road, and some Roman pottery was found close to the east end of the cathedral on The Paddock. There was also a Roman settlement, including a tile kiln built over an earlier Iron Age settlement, in Little Thetford, three miles (5 km) to the south.

Earliest known map of Ely by John Speed, 1610.

The origin and meaning of Ely’s name have always been regarded as obscure by place-name scholars, and they are still disputed. The earliest record of the name is in the Latin text of Bede’s ‘Ecclesiastical History of the English People’, where he wrote ‘Elge’. This is apparently not a Latin name, and subsequent Latin texts nearly all used the forms Elia, Eli, or Heli with inorganic H-. In Old English charters, and in the Anglo-Saxon Chronicle, the spelling is usually ‘Elig’. It has been said that the name Ely was derived from Old Northumbrian ‘ēlġē’, meaning ‘district of eels’. The theory is that the name then developed a vowel to become ‘ēliġē’, and was afterwards re-interpreted to mean ‘Eel Island’ or Isle of Eels. The city’s origins lay in the foundation of an abbey in 673 AD, one mile (1.6 km) to the north of the village of Cratendune on the Isle of Ely, under the protection of Saint Etheldreda, daughter of King Anna. St Etheldreda (also known as Æthelthryth) was a queen, and the founder and abbess of Ely; she built her monastery in 673 AD on the site of what is now Ely Cathedral. This first abbey was destroyed in 870 AD by Danish invaders and was rededicated to Etheldreda in 970 AD by Ethelwold, Bishop of Winchester. The abbots of Ely then accumulated such wealth in the region that in the Domesday survey of 1086 it was the “second richest monastery in England”. The first Norman bishop, Simeon, started building the cathedral in 1083. The octagon was built by sacrist Alan of Walsingham between 1322 and 1328 after the collapse of the original nave crossing on 22 February 1322, and Ely’s octagon is now considered “one of the wonders of the medieval world”. Building continued until the dissolution of the abbey in 1539 during the Reformation. The cathedral was then sympathetically restored between 1845 and 1870. As the seat of a diocese, Ely has long been considered a city, and in 1974 city status was granted by royal charter.

East aspect of St Mary’s vicarage, a Grade II listed building. Oliver Cromwell lived there between 1638 and 1646, and since 1990 the building has been open as the Oliver Cromwell’s House tourist attraction as well as Ely’s tourist information centre.

Cherry Hill is the site of Ely Castle, which is of Norman construction and is a United Kingdom scheduled monument. Of similar construction to Cambridge Castle, the 250-foot (76 m) diameter, 40-foot (12 m) high citadel-type ‘motte and bailey’ is thought to be a royal defence built by William the Conqueror following the submission of the Isle by rebels such as Earl Morcar and Hereward the Wake. This would date the first building of the castle to c. 1070. On 9 April 1224, although Ely had been a trading centre prior to this, King Henry III of England granted a market to the Bishop of Ely using “letters close”, an obsolete type of sealed legal document once used by the Pope, the British monarchy and certain officers of government. Present weekly market days are Thursday and Saturday, and seasonal markets are held monthly on Sundays and Bank Holiday Mondays from Easter to November. Following the accession of Mary I of England to the throne in 1553, the papacy made its first effective efforts to enforce the Pope Paul III-initiated Catholic reforms in England. During this time, which became known as the Marian Persecutions, two men from Wisbech were accused of not believing that the body and blood of Christ were present in the bread and wine of the sacrament of mass. This was viewed as Christian heresy and they were condemned and burnt at the stake. The cathedral itself was dedicated to St Peter at this time, and a windmill is shown on Mount Hill, where the post-conquest Ely Castle motte and bailey once stood. In the eighteenth century trees were planted on Mount Hill, which has been known as Cherry Hill since at least 1821. There was a form of early workhouse in 1687, perhaps at St Mary’s, and then a purpose-built workhouse was erected in 1725 for thirty-five inmates on what is now St Mary’s Court. Four other workhouses existed, including Holy Trinity on Fore Hill for eighty inmates (1738–1956) and the Ely Union workhouse, built in 1837, which housed up to three hundred inmates. The latter became Tower Hospital in 1948 and is now a residential building, Tower Court. Two other former workhouses were the Haven, Quayside, for unmarried mothers, and another on the site of what is now Hereward Hall in Silver Street. Over the years various diarists have written about Ely; for example Daniel Defoe, writing in the Eastern Counties section of ‘A tour thro’ the whole island of Great Britain’ (1722), wrote “to Ely, whose cathedral, standing in a level flat country, is seen far and wide … that some of it is so antient, totters so much with every gust of wind, looks so like a decay, and seems so near it, that when ever it does fall, all that ’tis likely will be thought strange in it, will be, that it did not fall a hundred years sooner”. The Ely and Littleport riots occurred between 22 and 24 May 1816. At the Special Commission assizes, held at Ely between 17 and 22 June 1816, twenty-four rioters were condemned. Nineteen had their sentences commuted to punishments ranging from penal transportation for life to twelve months’ imprisonment; the remaining five were executed on 28 June 1816. I have also seen it noted that an outbreak of cholera isolated Ely in 1832.

The Market Place, Ely, a pencil and watercolour by W. W. Collins published 1908.

The above shows the north-east aspect of Ely Cathedral in the background, with the Almonry, which is now a restaurant and art gallery. In front of that, to the right of the picture, is the 1847 corn exchange building, now demolished. Ely Cathedral was “the first great cathedral to be thoroughly restored”. Work commenced in 1845 and was completed nearly thirty years later; most of the work was “sympathetically” carried out by the architect George Gilbert Scott. The only pavement labyrinth to be found in an English cathedral was installed below the west tower in 1870. For over 800 years the cathedral and its associated buildings, built on an elevation sixty-eight feet (21 m) above the nearby fens, have visually influenced the city and its surrounding area. The abbey at Ely was one of many which were re-founded in the Benedictine reforms of King Edgar the Peaceful (943–975). The “special and peculiarly ancient” honour and freedoms given to Ely by charter at that time may have been intended to award only fiscal privilege, but have been interpreted to confer on subsequent bishops the authority and power of a ruler. These rights were reconfirmed in charters granted by King Edward the Confessor and in King William the Conqueror’s confirmation of the old English liberty at Kenford. The Isle of Ely was mentioned in some statutes as a county palatine; this provided an explanation of the bishop’s royal privileges and judicial authority, which would normally belong to the sovereign, but some legal authorities did not completely endorse the form of words. The bishop’s rights were not fully extinguished until 1837.

OpenMap of Ely showing the city boundary and environs.

As the seat of a diocese, Ely has long been considered a city, holding the status by ancient prescriptive right. When Ely was given a Local Board of Health by Queen Victoria in 1850, the order creating the board said it was to cover the “city of Ely”. The local board which governed the city from 1850 to 1894 called itself the “City of Ely Local Board”, and the urban district council which replaced it and governed the city from 1894 to 1974 similarly called itself the “City of Ely Urban District Council”. Ely’s city status was not explicitly confirmed, however, until 1 April 1974, when Queen Elizabeth II granted letters patent to its civil parish. Ely’s population of 20,256 (as recorded in 2011) makes it one of the smallest cities in England, although the population has increased noticeably since 1991, when it was recorded at 11,291. By urban area (1.84 square miles, 4.77 km2) Ely is among the ten smallest cities, but its city council area (22.86 square miles, 59.21 km2) covers far more ground than many others.

Sessions House (formerly Shire Hall), Lynn Road. Courthouse, built 1821. Since 2013 this has been the headquarters of City of Ely Council.

There are three tiers of local government covering Ely, at parish (city), district and county level, these being the City of Ely Council, East Cambridgeshire District Council and Cambridgeshire County Council. In terms of its administrative history, the city was governed by a local board from 1850 until 1894, when it became the City of Ely Urban District Council, which then operated from 1894 to 1974. The Isle of Ely County Council governed the Isle of Ely administrative county, which surrounded and included the city, from 1889 to 1965. In 1965 a reform of local government merged that county council with Cambridgeshire’s to form the Cambridgeshire and Isle of Ely County Council. In 1974, as part of a national reform of local government, the Cambridgeshire and Isle of Ely County Council merged with the Huntingdon and Peterborough County Council to form Cambridgeshire County Council, and it was sadly around this time that Whittlesey, where I grew up, ceased to be part of the Isle of Ely. The City of Ely Urban District Council became the City of Ely Council, a parish council covering the same area but with fewer powers, while the wider district became the responsibility of the new East Cambridgeshire District Council.

A 1648 drainage map showing the Isle of Ely still surrounded by water (Joan Blaeu, 1648, ‘Regiones Inundatae’).

The west of Cambridgeshire is made up of limestones from the Jurassic period, whilst the east Cambridgeshire area consists of Cretaceous, upper Mesozoic chalks known locally as ‘clunch’. In between these two major formations, the high ground forming the Isle of Ely is from a lower division of the Cretaceous system known as the Lower Greensand, which is capped by Boulder Clay; all local settlements, such as Stretham and Littleport, stand on similar islands. These islands rise above the surrounding flat land, which forms the largest plain in Britain and is made up of partly consolidated Jurassic clays or muds. Kimmeridge Clay beds, dipping gently to the west, underlie the Lower Greensand and are exposed, for example, in the Roswell Pits about one mile (1.6 km) from the city. The Lower Greensand is partly capped by glacial deposits forming the highest point in East Cambridgeshire, rising to 85 feet (26 m) above sea level in Ely. The low-lying fens surrounding the island of Ely were formed, prior to the seventeenth century, by alternate fresh-water and sea-water incursions. Major rivers in the region drain an area of some 6,000 square miles (16,000 km2), five times larger than the fens, into the basin that forms the fens. Peat formed in the fresh-water swamps and meres, whilst silts were deposited by the slow-moving sea-water. The Earl of Bedford, supported by Parliament, financed the draining of the fens during the seventeenth century, led by the Dutch engineer Cornelius Vermuyden, and the fens continue to be drained to this day. When Ely was an island surrounded by marshes and meres, the fishing of eels was important as both a food and an income for the abbot and his nearby tenants. Prior to the extensive and largely successful drainage of the fens during the seventeenth century, Ely was a trade centre for goods made out of willow, reeds and rushes, and wild fowling was a major local activity. Peat in the form of “turf” was used as a fuel and in the form of “moor” as a building material. Ampthill Clay was dug from the local area for the maintenance of river banks, and Kimmeridge Clay at Roswell Pits for the making of pottery wares. In general, from a geological perspective, “The district is almost entirely agricultural and has always been so. The only mineral worked at the present time is gravel for aggregate, although chalk, brick clay (Ampthill and Kimmeridge clays), phosphate (from Woburn Sands, Gault and Cambridge Greensand), sand and gravel, and peat have been worked on a small scale in the past”. Phosphate nodules, referred to locally as coprolites, were dug in the area surrounding Ely between 1850 and 1890 for use as an agricultural fertiliser, and this industry provided significant employment for the local labour force. One of the largest sugar beet factories in England was opened two miles (3 km) from the centre of Ely in 1925. The factory closed in 1981, although sugar beet is still farmed locally. Pottery was made in Ely from the twelfth century until 1860, and records show around eighty people who classed their trade as potters. “Babylon ware” is the name given to pottery made in one area of Ely. This ware is thought to be so named because there were potters in an area cut off from the centre by the re-routing of the Great Ouse river around 1200; by the seventeenth century this area had become known as Babylon. Although the reason for the name is unclear, by 1850 it was in official use on maps.
The building of the Ely to King’s Lynn railway in 1847 cut the area off even further, and the inhabitants could only cross to Ely by boat.

Eel Day carnival procession down Fore Hill, 2007.

Annual fairs have been held in Ely since the twelfth century. Apart from Saint Audrey’s (Etheldreda) seven-day fair, held either side of 23 June and first granted officially by King Henry I to the abbot and convent on 10 October 1189, two other fairs have been held: the 15-day festival of St Lambert, first granted in 1312, and the 22-day fair beginning on the Vigil of the Ascension, first granted in 1318. The festival of St Lambert had stopped by the eighteenth century. St Etheldreda’s and the Vigil of the Ascension fairs still continue, although the number of days has been considerably reduced and the dates have changed. Present-day annual events in Ely include Aquafest, which has been staged at the riverside by the Rotary Club on the first Sunday of July since 1978. Other events include the Eel Day carnival procession and the annual fireworks display in Ely Park, first staged in 1974. The Ely Folk Festival has been held in the city since 1985, and the Ely Horticultural Society have been staging their Great Autumn Show since 1927. Interestingly, the children’s book ‘Tom’s Midnight Garden’ by Philippa Pearce is partly set in Ely and includes a scene in Ely Cathedral, as well as scenes inspired by the author’s father’s own childhood experiences of skating along the frozen river from Cambridge to Ely in the frost of 1894–95. Also, the first series of Jim Kelly’s crime novels, featuring journalist Philip Dryden, is largely set in the author’s home town of Ely and in the Fens. Graham Swift’s 1983 novel ‘Waterland’ takes place, and recounts several historical events, in and around the town of Ely. The Tales of the Unexpected episode “The Flypaper” was filmed in Ely, and the album cover for Pink Floyd’s ‘The Division Bell’ was created by Storm Thorgerson with Ely as the background between two massive sculptures that he had erected outside the city.

This week… A quote from an army officer:

Him? “I wouldn’t trust him with a pea shooter!”


Routine

So many things in this world and beyond are governed by routine. That is normal. Yes, things change; in fact one of my favourite sayings is that change is the one constant in this universe! But so much of it changes slowly. At least it seems that way to us, although there are days when we find time seems to have passed by in a flash, whilst at other times it simply drags. However, it goes by at the same steady rate. Whether it be our own world, the planets, our Solar System, the Milky Way or so much more, it all seems to follow a regular routine. Then, all of a sudden, something occurs and we see change. For example, NASA’s Cassini spacecraft has provided the first direct evidence of small meteoroids breaking into streams of rubble and crashing into Saturn’s rings. These observations make Saturn’s rings the only location besides Earth, the Moon and Jupiter where scientists and amateur astronomers have been able to observe impacts as they occur. Studying the impact rate of meteoroids from outside the Saturn system helps scientists understand how different planet systems in the Solar System formed. Our Solar System is full of small, speeding objects, and planetary bodies are frequently pummelled by them. The meteoroids at Saturn range from about one-half inch to several yards (1 centimetre to several metres) in size. It took scientists years to distinguish tracks left by nine meteoroids in 2005, 2009 and 2012! As to our everyday life, daily life or routine, this comprises the ways in which we typically act, think and feel on a daily basis. In some forms of life, routine is governed by the seasons, with some animals hibernating, whilst other creatures have relatively short lifespans and live at a faster pace. In many lives on Earth we see ‘diurnality’, a form of plant and animal behaviour characterised by activity during the daytime, with a period of sleeping or other inactivity at night. The timing of activity by an animal depends on a variety of environmental factors such as the temperature, the ability to gather food by sight, the risk of predation and the time of year. Diurnality is a cycle of activity within a 24-hour period, whilst cyclic activities called circadian rhythms are endogenous cycles not dependent on external cues or environmental factors. Animals active during twilight are ‘crepuscular’, those active during the night are nocturnal, and animals active at sporadic times during both night and day are ‘cathemeral’, a term I had never heard of before. Plants that open their flowers during the daytime are described as diurnal, whilst those that bloom during the night-time are nocturnal. The timing of flower opening is often related to the time at which preferred pollinators are foraging. For example, sunflowers open during the day to attract bees, whereas the night-blooming cereus opens at night to attract large sphinx moths. Again, this was another thing I did not previously know. Human diurnality means most people sleep at least part of the night and are active in the daytime. Most eat two or three meals in a day. Working time, apart from shift work, mostly involves a daily schedule, beginning in the morning. This produces the daily ‘rush hours’ experienced by many millions, and the drive time focused on by radio broadcasters. Evening is often leisure time. Beyond these broad similarities, lifestyles vary and different people spend their days differently. For example, nomadic life differs from sedentary ways, and the more urban people live differently from rural folk.
In addition, there are differences in the lives of the rich and the poor, or between labourers and intellectuals, which may go beyond their ‘regular’ working hours. Children and adults also vary in what they do each day as their need for sleep changes. In the study of everyday life, gender has been an important factor in how it is conceived. Much of everyday life is automatic, in that it is driven by current environmental features. Daily life is also studied by sociologists to investigate how it is organised and given meaning. At one time daily entertainment consisted mainly of telling stories in the evening. This custom developed into the theatres of ancient Greece and other professional entertainments. Later, reading became less of a mysterious speciality of scholars and more a common pleasure for people who could read. As time passed, different forms of media became available to more people. Different media forms serve different purposes in the everyday lives of different people, giving them the opportunity to choose what forms of media they use, such as watching television, using the Internet, listening to the radio or reading newspapers and magazines. In many cases these help people to accomplish their tasks more effectively, but a great many also use them for relaxation as well as learning.

Booked!

Our everyday lives are shaped through language and communication. We choose what to do with our time based on opinions and ideals formed through the discourse we are exposed to. Much of the dialogue people are subject to comes from the mass media, which is an important factor in what shapes human experience. The media uses language to make an impact on our everyday life, whether that be as small as helping to decide where to eat or as big as choosing a representative in government. Interestingly, to improve our everyday life, one professor in a Department of Communication and Culture says people should seek to understand the rhetoric that so often and unnoticeably changes their lives. They write that “…rhetoric enables us to make connections… It is about understanding how we engage with the world”. We engage in activities of daily living, a term used in healthcare to refer to daily self-care activities within an individual’s place of residence, in outdoor environments, or both. Healthcare professionals routinely refer to the ability or inability to perform these as a measurement of the functional status of a person, particularly in regard to people with disabilities and the elderly. These activities are defined as “the things we normally do…such as feeding ourselves, bathing, dressing, grooming, work, homemaking, and leisure”. The ability of the elderly to perform these activities, and the extent to which they can do so, is the focus of ‘gerontology’, another new word to me, which is the study of the social, cultural, psychological, cognitive and biological aspects of ageing, along with related understandings of later life. Indeed, we hear so much in the news about care homes, and they have changed quite a bit, mainly for the better I am happy to say. This is something that I am planning to research and write a little more about in the future.

This week… Zones.

There are four ‘zones’ that we humans have around us.

Intimate zone: less than 0.5 metres (1.5 feet)
Personal zone: 0.5 to 1.5 metres (1.5 to 4 feet)
Social zone: 1.5 to 3 metres (4 to 12 feet)
Public zone: greater than 3 metres (12 feet)
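For anyone who likes to see such things written out programmatically, here is a minimal sketch (my own illustration, not from any particular source) that classifies a distance between two people into one of these zones, using the metric thresholds listed above.

```python
def personal_space_zone(distance_m: float) -> str:
    """Classify an interpersonal distance (in metres) into one of the
    four zones listed above, using the metric boundary figures."""
    if distance_m < 0.5:
        return "intimate"
    elif distance_m <= 1.5:
        return "personal"
    elif distance_m <= 3.0:
        return "social"
    else:
        return "public"

# A quick check of the boundaries:
for d in (0.3, 1.0, 2.0, 5.0):
    print(d, "metres ->", personal_space_zone(d), "zone")
```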

Fascinating!


Tennis

As I guess most people know, tennis is a racket sport played either individually against a single opponent (singles) or between two teams of two players each (doubles). But it has an interesting history. Each player uses a tennis racket that is strung with cord to strike a hollow rubber ball covered with felt over or around a net and into the opponent’s side of the court. The object of the game is to manoeuvre the ball in such a way that the opponent is not able to play a valid return. The player who is unable to return the ball validly does not gain the point; the opposing player does. Tennis is an Olympic sport and is played at all levels of society and at all ages. The sport can be played by anyone who can hold a racket, including wheelchair users. The original forms of tennis developed in France during the late Middle Ages, but the modern form of tennis originated in Birmingham, England, in the late nineteenth century as ‘lawn tennis’. It had close connections both to various field (lawn) games such as croquet and bowls and to the older racket sport today called ‘real tennis’, which is one of several games sometimes called “the sport of kings” and is the original racquet sport from which the modern game of tennis is derived. It is also known as ‘court tennis’ in the United States, formerly ‘royal tennis’ in England and Australia, or ‘courte-paume’ in France, to distinguish it from their ‘longue-paume’, and in reference to the older, racquetless game of ‘jeu de paume’, the ancestor of modern handball and racquet games. The rules of modern tennis have changed little since the 1890s, with just two exceptions: until 1961 the server had to keep one foot on the ground at all times, and the tie-break was adopted in the 1970s. A recent addition to professional tennis has been the adoption of electronic review technology coupled with a point-challenge system, which allows a player to contest the line call of a point, a system known, quite appropriately, as Hawk-Eye. Tennis is played by millions of recreational players and is a popular worldwide spectator sport. The four Grand Slam tournaments (also referred to as the majors) are especially popular, these being the Australian Open, played on hard courts; the French Open, played on red clay courts; Wimbledon, played on grass courts; and the United States Open, also played on hard courts.

A painting from Cremona from the end of the sixteenth century.
‘Jeu de paume’ in the seventeenth century.

Historians believe that the game’s ancient origin lay in twelfth-century northern France, where a ball was struck with the palm of the hand. King Louis X of France was a keen player of ‘jeu de paume’ (game of the palm), which evolved into real tennis. However, the King was unhappy with playing tennis outdoors and accordingly had indoor, enclosed courts made in Paris around the end of the thirteenth century and in due course this design spread across royal palaces all over Europe. In June 1316 at Vincennes, Val-de-Marne, and following a particularly exhausting game, King Louis drank a large quantity of cooled wine and subsequently died of either pneumonia or pleurisy, although there was also suspicion of poisoning. Because of the contemporary accounts of his death, King Louis X is history’s first tennis player known by name. Another of the early enthusiasts of the game was King Charles V of France, who had a court set up at the Louvre Palace. It was not until the sixteenth century that rackets came into use and the game began to be called ‘tennis’, from the French term ‘tenez’, which can be translated as ‘hold!’, ‘receive!’ or ‘take!’, an interjection used as a call from the server to his opponent. It was popular in England and France, although the game was only played indoors, where the ball could be hit off the wall. Henry VIII of England was a big fan of this game, which is now known as ‘real tennis’. An epitaph in St Michael’s Church, Coventry, written c. 1705, reads, in part:
Here lyes an old toss’d Tennis Ball:
Was racketted, from spring to fall,
With so much heat and so much hast,
Time’s arm for shame grew tyred at last.

During the eighteenth and early nineteenth centuries, as real tennis declined, new racket sports emerged in England. The invention of the first lawn mower in Britain in 1830 is believed to have been a catalyst for the preparation of modern-style grass courts, sporting ovals, playing fields, pitches, greens, etc. This in turn led to the codification of modern rules for many sports, including lawn tennis, most football codes, lawn bowls and others.

Augurio Perera’s house in Edgbaston, Birmingham, where he and Harry Gem first played the modern game of lawn tennis.

Between 1859 and 1865 Harry Gem, a solicitor, and his friend Augurio Perera developed a game that combined elements of racquets and the Basque ball game ‘Pelota’, which they played on Perera’s croquet lawn in Edgbaston, Birmingham; in 1872, along with two local doctors, they founded the world’s first tennis club on Avenue Road, Leamington Spa. This is where ‘lawn tennis’ was used as the name of an activity by a club for the first time. In December 1874, a British army officer, Walter Clopton Wingfield, wrote to Harry Gem, commenting that he (Wingfield) had been experimenting with his version of lawn tennis “for a year and a half”. In December 1873, Wingfield had designed and patented a game which he called ‘sphairistikè’, meaning ‘ball-playing’, and which was soon known simply as ‘sticky’, for the amusement of guests at a garden party on his friend’s estate of Nantclwyd Hall, in Llanelidan, Wales. According to Honor Godfrey, museum curator at Wimbledon, Wingfield popularised this game enormously: he produced a boxed set which included a net, poles, rackets, balls for playing the game and, most importantly, his rules. He was absolutely terrific at marketing and he sent his game all over the world. He had very good connections with the clergy, the law profession and the aristocracy, and he sent thousands of sets out in the first year or so, in 1874. The world’s oldest annual tennis tournament took place at the Leamington Lawn Tennis Club in 1874, three years before the All England Lawn Tennis and Croquet Club would hold its first championships at Wimbledon, in 1877. That first Championships culminated in a significant debate on how to standardise the rules.

Tennis doubles final at the 1896 Olympic Games.

Tennis became popular in France, where the French Championships date back to 1891, although until 1925 they were open only to tennis players who were members of French clubs. Thus, Wimbledon, the US Open, the French Open and the Australian Open (dating to 1905) became and have remained the most prestigious events in tennis. Together, these four events are called the Majors or ‘Slams’, a term borrowed from bridge.

Lawn tennis in Canada, c. 1900.

In 1913, the International Lawn Tennis Federation (ILTF), now the International Tennis Federation (ITF), was founded and established three official tournaments as the major championships of the day. The World Grass Court Championships were awarded to Great Britain, and the World Hard Court Championships to France (‘hard court’ was the term then used for clay courts), although some editions were held in Belgium instead. The World Covered Court Championships for indoor courts were awarded annually; Sweden, France, Great Britain, Denmark, Switzerland and Spain each hosted the tournament. At a meeting held on 16 March 1923 in Paris, the title ‘World Championship’ was dropped and a new category of ‘Official Championship’ was created for events in Great Britain, France, the US and Australia, these being today’s Grand Slam events. For the four recipient nations, the effect of replacing the ‘world championships’ with ‘official championships’ was straightforward: each became a major nation of the federation with enhanced voting power, and each now operated a major event. The comprehensive rules promulgated in 1924 by the ILTF have remained largely stable in the ensuing years, the one major change being the addition of the tiebreak system. Tennis withdrew from the Olympics after the 1924 Games but returned sixty years later as a 21-and-under demonstration event in 1984. The success of the event was overwhelming, and the IOC decided to reintroduce tennis as a full-medal sport at Seoul in 1988. The Davis Cup, an annual competition between men’s national teams, dates to 1900. The analogous competition for women’s national teams, the Fed Cup, was founded as the Federation Cup in 1963 to celebrate the 50th anniversary of the founding of the ITF. In 1926, a promoter established the first professional tennis tour, with a group of American and French tennis players playing exhibition matches to paying audiences. As a result, players who turned ‘pro’ were no longer permitted to compete in the major (amateur) tournaments. In 1968, commercial pressures and rumours of some amateurs taking money under the table led to the abandonment of this distinction, inaugurating the Open Era, in which all players could compete in all tournaments, and top players were able to make their living from tennis. With the beginning of the Open Era, the establishment of an international professional tennis circuit, and revenues from the sale of television rights, tennis’s popularity has spread worldwide, and the sport has shed its middle-class English-speaking image, although it is acknowledged that this stereotype still exists.

Racket of Franjo Punčec in a wooden frame dating to the late 1930s.

Part of the appeal of tennis stems from the simplicity of equipment required for play; beginners need only a racket and balls. The components of a tennis racket include a handle, known as the grip, connected to a neck which joins a roughly elliptical frame that holds a matrix of tightly pulled strings. For the first 100 years of the modern game, rackets were made of wood and of standard size, and strings were of animal gut. Laminated wood construction yielded more strength in rackets used through most of the twentieth century, until first metal and then composites of carbon graphite, ceramics, and lighter metals such as titanium were introduced. These stronger materials enabled the production of oversized rackets that yielded yet more power. Meanwhile, technology led to the use of synthetic strings that match the feel of gut yet with added durability. Under the modern rules of tennis, rackets must adhere to a range of guidelines: the hitting area, composed of the strings, must be flat and generally uniform; the frame of the hitting area may not be more than 29 inches (74 cm) in length and 12.5 inches (32 cm) in width; the entire racket must be of a fixed shape, size, weight and weight distribution; and there may not be any energy source built into the racket. Also, the racket must not provide any kind of communication, instruction or advice to the player during the match. The rules regarding rackets have changed over time, as material and engineering advances have been made. For example, the maximum length of the frame had been 32 inches (81 cm) until 1997, when it was shortened to 29 inches (74 cm).

Two different tennis strings, in lengths of 12 m (left) and 200 m (right).

There are multiple types of tennis strings, including natural gut and synthetic strings made from materials such as nylon, Kevlar or polyester. The first type of tennis string available was natural gut, and it was the only type used until synthetic strings were introduced in the 1950s. Natural gut strings are still used frequently by players such as Roger Federer. They are made from cow intestines, provide increased power and are easier on the arm than most strings.

As you can see, a modern tennis racket and balls.

Tennis balls were originally made of cloth strips stitched together with thread and stuffed with feathers. Modern tennis balls are made of hollow vulcanised rubber with a felt coating. Traditionally white, the predominant colour was gradually changed to optic yellow in the latter part of the twentieth century to allow for improved visibility. Tennis balls must conform to certain criteria for size, weight, deformation and bounce to be approved for regulation play. The International Tennis Federation (ITF) defines the official diameter as 65.41–68.58 mm (2.575–2.700 in), and balls must weigh between 56.0 and 59.4 g (1.98 and 2.10 oz). Although the process of producing the balls has remained virtually unchanged for the past one hundred years, the majority of manufacturing now takes place in the Far East; the relocation is due to cheaper labour and material costs in the region. Tournaments that are played under the ITF Rules of Tennis must use balls that are approved by the ITF and named on its official list of approved tennis balls.
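Those ITF figures lend themselves to a tiny worked example. The sketch below is my own illustration, using only the diameter and weight ranges quoted above and ignoring the deformation and bounce tests; it simply checks whether a measured ball falls inside those two ranges.

```python
# Approved ranges quoted above (ITF): diameter 65.41-68.58 mm, mass 56.0-59.4 g.
DIAMETER_MM = (65.41, 68.58)
MASS_G = (56.0, 59.4)

def ball_within_spec(diameter_mm: float, mass_g: float) -> bool:
    """Return True if the ball's diameter and mass both fall inside the
    quoted ITF ranges. Deformation and bounce criteria are not modelled."""
    return (DIAMETER_MM[0] <= diameter_mm <= DIAMETER_MM[1]
            and MASS_G[0] <= mass_g <= MASS_G[1])

print(ball_within_spec(67.0, 58.0))   # True: inside both ranges
print(ball_within_spec(67.0, 60.0))   # False: too heavy
```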

The dimensions of a tennis court.

The game of tennis is played on a rectangular, flat surface. The court is 78 feet (23.77 m) long and 27 feet (8.2 m) wide for singles matches, or 36 feet (11 m) wide for doubles matches, and additional clear space around the court is required so that players can reach overrun balls. A net is stretched across the full width of the court, parallel with the baselines, dividing it into two equal ends. The net is held up by either a cord or a metal cable of diameter no greater than 1/3 inch (0.8 cm). The net is 3 feet 6 inches (1.07 m) high at the posts and 3 feet (0.91 m) high in the centre. The net posts are 3 feet (0.91 m) outside the doubles court on each side or, for a singles net, 3 feet (0.91 m) outside the singles court on each side. Tennis is unusual in that it is played on a variety of surfaces, these being grass, clay and hard courts of concrete or asphalt topped with acrylic. Occasionally carpet is used for indoor play, with hardwood flooring having been used historically, and artificial turf courts can also be found. The lines that delineate the width of the court are called the baseline (farthest back) and the service line (middle of the court). The short mark in the centre of each baseline is referred to as either the hash mark or the centre mark. The outermost lines that make up the length are called the doubles sidelines; they are the boundaries for doubles matches. The lines to the inside of the doubles sidelines are the singles sidelines, and are the boundaries in singles play. The area between a doubles sideline and the nearest singles sideline is called the doubles alley, playable in doubles play. The line that runs across the centre of a player’s side of the court is called the service line because the serve must be delivered into the area between the service line and the net on the receiving side. Despite its name, this is not where a player legally stands when making a serve. The line dividing the service line in two is called the centre line or centre service line. The boxes this centre line creates are called the service boxes; depending on a player’s position, they have to hit the ball into one of these when serving. A ball is only ‘out’ if none of it has hit the area inside the lines, or the line itself, upon its first bounce. All lines are required to be between 1 and 2 inches (25 and 51 mm) in width, with the exception of the baseline, which can be up to 4 inches (100 mm) wide, although in practice it is often the same width as the others.
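To make the in/out rule above a little more concrete, here is a minimal sketch (my own illustration, not an official algorithm) that checks whether the first bounce of a rally shot is inside the court, measuring positions in feet from the centre of the net and using the 78-foot length and 27/36-foot widths quoted above. A ball touching the line counts as in, and the service-box rules that apply to serves are deliberately ignored.

```python
COURT_LENGTH_FT = 78.0          # baseline to baseline
SINGLES_WIDTH_FT = 27.0
DOUBLES_WIDTH_FT = 36.0

def bounce_is_in(x_ft: float, y_ft: float, doubles: bool = False) -> bool:
    """x is the distance from the centre line towards the sideline,
    y is the distance from the net towards the baseline (both in feet).
    A bounce on the line counts as in."""
    half_width = (DOUBLES_WIDTH_FT if doubles else SINGLES_WIDTH_FT) / 2
    half_length = COURT_LENGTH_FT / 2
    return abs(x_ft) <= half_width and 0 <= y_ft <= half_length

print(bounce_is_in(13.5, 39.0))          # True: on the singles sideline and baseline
print(bounce_is_in(15.0, 20.0))          # False in singles (in the doubles alley)...
print(bounce_is_in(15.0, 20.0, True))    # ...but True in doubles
```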

Two players before a serve.

The players or teams start on opposite sides of the net. One player is designated the ‘server’, and the opposing player is the ‘receiver’. The choice to be server or receiver in the first game, and the choice of ends, is decided by a coin toss before the warm-up starts. Service alternates game by game between the two players or teams. For each point, the server starts behind the baseline, between the centre mark and the sideline. The receiver may start anywhere on their side of the net. When the receiver is ready, the server will serve, although the receiver must play to the pace of the server. For a service to be legal, the ball must travel over the net, without touching it, and into the diagonally opposite service box. If the ball hits the net but lands in the service box, this is a ‘let’ or ‘net service’, which is void, and the server retakes that serve. The player can serve any number of let services in a point and they are always treated as voids and not as faults. A fault is a serve that falls long or wide of the service box, or does not clear the net. There is also a ‘foot fault’, when a player’s foot touches the baseline or an extension of the centre mark before the ball is hit. If the second service, after a fault, is also a fault, the server ‘double faults’ and the receiver wins the point. However, if the serve is in, it is considered a legal service. A legal service starts a ‘rally’, in which the players alternate hitting the ball across the net. A legal return consists of a player hitting the ball so that it falls in the opponent’s court before it has bounced twice or hit any fixtures except the net. A player or team cannot hit the ball twice in a row. The ball must travel over or round the net into the other player’s court. A ball that hits the net during a rally is considered a legal return as long as it crosses into the opposite side of the court. The first player or team to fail to make a legal return loses the point. The server then serves from the other side of the centre mark at the start of the next point.

The scoreboard of a tennis match.

A game consists of a sequence of points played with the same player serving. A game is won by the first player to have won at least four points in total and at least two points more than the opponent. The running score of each game is described in a manner peculiar to tennis: scores from zero to three points are described as ‘love’, ’15’, ’30’ and ’40’, respectively. If at least three points have been scored by each player, making the players’ scores equal at 40 apiece, the score is not called out as ’40–40’, but rather as ‘deuce’. If at least three points have been scored by each side and a player has one more point than his opponent, the score of the game is ‘advantage’ for the player in the lead. The score of a tennis game during play is always read with the serving player’s score first. In tournament play, the chair umpire calls the point count (e.g., ’15–love’) after each point. At the end of a game, the chair umpire also announces the winner of the game and the overall score. Leading on from that are ‘sets’. A set consists of a sequence of games played with service alternating between games, ending when the count of games won meets certain criteria. Typically, a player wins a set by winning at least six games and at least two games more than the opponent. If one player has won six games and the opponent five, an additional game is played. If the leading player wins that game, the player wins the set 7–5. If the trailing player wins the game (tying the set 6–6), a ‘tiebreak’ is played. A tiebreak, played under a separate set of rules, allows one player to win one more game and thus the set, to give a final set score of 7–6. A tiebreak game can be won by scoring at least seven points and at least two points more than the opponent. In a tiebreak, the two players serve in an ‘ABBA’ pattern, which has been shown to be fair. If a tiebreak is not played, the set is referred to as an ‘advantage set’, where the set continues without limit until one player leads by a two-game margin. A ‘love set’ means that the loser of the set won zero games, colloquially termed a ‘jam donut’ in the United States. In tournament play, the chair umpire announces the winner of the set and the overall score. The final score in sets is always read with the winning player’s score first, e.g. ‘6–2, 4–6, 6–0, 7–5’. This leads on to a ‘match’, which consists of a sequence of sets. The outcome is determined through a best-of-three or best-of-five sets system. On the professional circuit, men play best-of-five-set matches at all four Grand Slam tournaments, the Davis Cup and the final of the Olympic Games, and best-of-three-set matches at all other tournaments, while women play best-of-three-set matches at all tournaments. The first player to win two sets in a best-of-three, or three sets in a best-of-five, wins the match. Only in the final sets of matches at the French Open, the Olympic Games and the Fed Cup are tiebreaks not played. In these cases, sets are played indefinitely until one player has a two-game lead, occasionally leading to some remarkably long matches! In tournament play, the chair umpire announces the end of the match with the well-known phrase ‘Game, set, match’, followed by the winning person’s or team’s name.
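Since the game-scoring convention above is essentially a small algorithm, here is a minimal sketch (my own illustration of the rules as described, not any official implementation) that turns the raw point counts of the server and receiver into the call an umpire would make.

```python
CALLS = ["love", "15", "30", "40"]

def game_score(server_pts: int, receiver_pts: int) -> str:
    """Describe a game in progress from the raw point counts,
    following the scoring rules described above (server's score read first)."""
    # Game over: at least four points won and a two-point margin.
    if max(server_pts, receiver_pts) >= 4 and abs(server_pts - receiver_pts) >= 2:
        return "game to " + ("server" if server_pts > receiver_pts else "receiver")
    # Both players on at least three points: deuce or advantage.
    if server_pts >= 3 and receiver_pts >= 3:
        if server_pts == receiver_pts:
            return "deuce"
        return "advantage " + ("server" if server_pts > receiver_pts else "receiver")
    # Otherwise read the two scores, server first.
    return f"{CALLS[server_pts]}-{CALLS[receiver_pts]}"

print(game_score(1, 0))   # 15-love
print(game_score(3, 3))   # deuce
print(game_score(4, 3))   # advantage server
print(game_score(5, 3))   # game to server
```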

Convention dictates that the two players shake hands at the end of a match, first with each other and then with the umpire.

A tennis match is intended to be continuous. Because stamina is a relevant factor, arbitrary delays are not permitted. In most cases, service is required to occur no more than 20 seconds after the end of the previous point. This is increased to 90 seconds when the players change ends (after every odd-numbered game), and a 2-minute break is permitted between sets. Other than this, breaks are permitted only when forced by events beyond the players’ control, such as rain, damaged footwear, a damaged racket, or the need to retrieve an errant ball. Should a player be deemed to be stalling repeatedly, the chair umpire may initially give a warning, followed by the subsequent penalties of a point, a game, and finally default of the match for the player who consistently takes longer than the allowed time limit. In the event of a rain delay, darkness or other external conditions halting play, the match is resumed at a later time, with the same score as at the time of the delay, and each player at the same end of the court as when play was halted, or as close to the same relative compass point if play is resumed on a different court. Balls wear out quickly in serious play and, therefore, in both ATP and WTA tournaments they are changed after every nine games, with the first change occurring after only seven games, because the first set of balls is also used for the pre-match warm-up. In ITF tournaments like the Fed Cup, the balls are changed after every eleven games (rather than nine), with the first change occurring after only nine games (instead of seven). An exception is that a ball change may not take place at the beginning of a tiebreak, in which case the ball change is delayed until the beginning of the second game of the next set. As a courtesy, the server will often signal to the receiver before the first serve of a game in which new balls are being used. Continuity of the balls’ condition is considered part of the game, so if a re-warm-up is required after an extended break in play (usually due to rain), then the re-warm-up is done using a separate set of balls, and use of the match balls is resumed only when play resumes. Within the game there are different stances a player may adopt to prepare for returning a shot; essentially, the stance enables them to move quickly in order to play a particular stroke. There are four main stances in modern tennis: open, semi-open, closed, and neutral. All four stances involve the player crouching in some manner: as well as being a more efficient striking posture, it allows them to preload their muscles in order to play the stroke more dynamically. Which stance is selected is strongly influenced by shot selection, and a player may quickly alter their stance depending on the circumstances and the type of shot they intend to play. Any given stance also alters dramatically based upon the actual playing of the shot, with dynamic movements and shifts of body weight occurring. In a similar way, a competent tennis player has eight basic shots in his or her repertoire: the serve, forehand, backhand, volley, half-volley, overhead smash, drop shot, and lob. In addition, there are different ways that a player may grip their racket. I do not, however, intend to go into greater detail here on such things as the shots which can be played, or about individual players, but I hope this has been a useful insight into aspects of the game that I certainly did not know until I began my research!
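The nine-and-seven ball-change rhythm mentioned above is easy to tabulate. Here is a minimal sketch (my own illustration, based only on the schedule described and ignoring the tiebreak exception) that lists the game numbers after which new balls come into play under the ATP/WTA pattern and under the ITF eleven-and-nine pattern.

```python
def ball_change_games(first_change: int, interval: int, total_games: int):
    """Game numbers after which balls are changed: the first change comes
    after 'first_change' games, then after every 'interval' games."""
    changes, game = [], first_change
    while game <= total_games:
        changes.append(game)
        game += interval
    return changes

# ATP/WTA: first change after 7 games, then every 9 games.
print(ball_change_games(7, 9, 40))    # [7, 16, 25, 34]
# ITF (e.g. Fed Cup): first change after 9 games, then every 11 games.
print(ball_change_games(9, 11, 40))   # [9, 20, 31]
```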

This week… one I could not resist.
Did you hear about the tennis player who was not allowed to take out library books about aces? It was because he never returned them.
