Long before the digital generation came thundering over the horizon, people were fascinated by the idea of recording their surroundings, friends and daily lives as permanent images formed from the action of light on a variety of silver salts. They even took selfies.
Early attempts at capturing these pictures required processes that were complex, challenging and not without a certain risk. The daguerreotype, brainchild of Louis Daguerre in 1839, used a copper plate coated with polished silver as the medium. This was sensitised with iodine vapour to produce a coating of silver iodide. After exposure to light in the camera, the latent image was developed by (wait for it…) holding it over a dish of mercury heated to 60°C – a process that would make today’s lab manager run screaming for their Coshh paperwork.
Daguerreotypes were difficult to view except under ideal lighting conditions and almost impossible to copy. The process effectively died out after Frederick Scott Archer introduced the collodion, or ‘wet plate’, process in 1851, allowing a glass plate to be used as the substrate. The chemistry was based on a combination of silver nitrate and potassium iodide, but still carried some health and safety concerns – potassium cyanide (yes, really) was prized as a fixer because it acted faster than some safer alternatives. Although the wet plate process was capable of excellent results, the plate had to be exposed while its surface was still moist, making field photography complicated.
By the 1870s a successful dry plate process had evolved, thanks to Richard Leach Maddox. Based on silver bromide in a gelatine matrix or emulsion, it was more sensitive to light than the collodion process. Additionally, the sensitised plate could be kept for months before use, which kick-started a new industry for the supply of ready-to-use photographic materials. Once film was being manufactured in bulk, to a consistent standard and at a good price, few photographers bothered to prepare their own – although this remains a niche interest to this day. The chemistry of film development and printing, however, offered those keen to advance their art a whole range of interesting, and occasionally hazardous, recipes to experiment with and customise.
I have in my possession a practical manual from the mid-20th century that details some of the chemical options. Obviously well used, its glossy pages are crisped by darkroom spillages and carry an aroma which, after brief exposure, brings a suspicious tingling to the lips. A run through the list of ingredients for these developers, fixers, hardeners, reducers and intensifiers reveals a cocktail including mercuric chloride, potassium cyanide, uranium nitrate and a range of concentrated acids. These are things I would hesitate to Google for, let alone use in an open lab, and I might wear gloves when handling this volume in future.
Even with the tools of digital photography embedded in every mobile phone, many photographers still feel happiest working with analogue imaging. My hipster friends inform me that monochrome photography is fashionable again and there’s an active second-hand market for film cameras. So how should the chemist keen to dabble in imaging engage with these dark arts? I suggest exploring the under-regarded, retro world of the cyanotype. All you need is some paper, water, a plastic tray, access to ferric ammonium citrate and potassium ferricyanide, plus a modicum of patience. The craft of making a cyanotype is both slow and pleasingly messy, yet the subtle blue-toned prints that result offer a sense of achievement often missing from our digital world. Try it – I think you’ll enjoy it.
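For the curious, the colour-forming chemistry can be sketched in two steps – a simplified outline of the generally accepted mechanism, not a full account. Ultraviolet light reduces the iron(III) in the citrate complex to iron(II), with the citrate acting as the sacrificial electron donor; the iron(II) then combines with ferricyanide (with a potassium counter-ion from the sensitiser) to precipitate the insoluble Prussian-blue pigment:

```latex
% Photoreduction: citrate serves as the electron donor under UV light
\mathrm{Fe^{3+}} \xrightarrow{\;h\nu,\ \text{citrate}\;} \mathrm{Fe^{2+}}

% Pigment formation: iron(II) plus ferricyanide gives the
% insoluble blue iron(III) hexacyanoferrate(II) (Prussian blue)
\mathrm{K^{+} + Fe^{2+} + [Fe(CN)_6]^{3-} \longrightarrow KFe^{III}[Fe^{II}(CN)_6]\!\downarrow}
```

A plain water wash then removes the unreacted, still-soluble iron salts, leaving only the blue image behind – which is why the final rinse is all the ‘fixing’ a cyanotype needs.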