Why do a PhD? My thoughts…

Digital music workshop during Hastings Creative Summer, 2018. (c) Jon Pratty, MSL Discover

If you follow me on social media (@jon_pratty on Twitter, and jonnypratty on Instagram) you’ll know I’ve been busy taking an MA at Sussex University and I’m now heading towards a PhD project at Sussex, too. Behind the scenes, though, there’s much more to the story.

I’ve been involved in culture and the creative industries since 1985, and studied fine art at college for five years before that, so this is pretty much a life’s work. My career has mostly centred on writing, journalism and digital publishing, with recent activity heading towards programming and delivering creative media events for young people.

I’ve continued to work on side projects and commissions related to museums and galleries, and I’m currently enjoying helping a small north Kent venue, Minster Abbey Gatehouse museum, deliver an HLF-funded digitisation project. I’m also just starting to write regularly for Museums Journal, and a new piece about ethics, museums and Facebook will appear soon.

So why do a PhD? I’m not doing this because I want to have those letters after my name. I suspect some people carry out research because they want to stay in the university environment, but I haven’t personally met anyone who has admitted that. No – pretty much everyone I’ve met at Sussex (and other academic places where I have contacts) is passionate about exploring new ideas and having an impact on the knowledge base they’ve chosen to research.

I’m no different. After working at Arts Council England for five years, and enjoying the opportunity to influence strategy and impact on local arts audiences, I’m interested in how we can begin to help culture in local places sustain and develop in a difficult funding environment.

In my home town, Hastings, it’s difficult to make the case for new kinds of cultural and creative development. It’s a challenging place to be an artist or creative; there are few jobs, there’s a very flat town economy, and in the referendum Hastings voted to leave the EU. All these factors mean that to raise funds to begin new and positive projects, we need to be able to access evidence and case studies that show how to do it, from people who have successfully built and delivered the kind of work I’m interested in.

So one track in my PhD research is about why cultures grow in certain places and not in others. I want to understand why Bristol is ‘The Playable City’ and Leeds has Data Mill North, because I need to make the case to local civic leaders, funders and stakeholders that investment in socially-centred digital culture projects like Hastings Creative Spring will eventually bring new jobs of the right kind to regional and rural Britain, as well as the Smart Cities of the future. We’re looking to uncover the social and economic justification for investment in creative industries and culture that civic leaders can understand, with evidence to back it up, in terms easily parsed in council discussions.

So that’s the seam of culture, society and place development that I’m keen to explore with my PhD proposal. I’m not sure where it will go, or what the end product will be, other than having a clear focus on the local needs I know I want to answer through the work.

My advantage as a researcher is that I have worked in the creative industries for 30 years, have a multitude of contacts, and know many of the places where hubs, clusters, creative communities and artist networks have grown, matured, sustained and, in some cases, closed down. I’m expecting to set out a methodology that works through oral history interviews with a number of key people across the creative sectors in one major city, Brighton, and then I’ll carry out comparable interviews in other creative clusters to contrast with the Brighton work.

Initially, I intended to express the histories discovered in a graphic family tree and a searchable database, to allow analysis of the development and movement of individuals, creative agencies and companies in the places I’m exploring. This might change as I get further into the project.

In terms of my academic contacts and sector and research awareness, I’ve been working and meeting with the key researchers and writers in creative industry development for the last ten years. In 2011 I initiated Brighton Digital Festival, which led to subsequent discussions with NESTA, Wired Sussex, the University of Brighton and the University of Sussex.

I was on the Advisory Board of the influential Brighton FUSE report, where I became friends with Professor Gillian Youngs, now Dean of the Faculty of Arts and Humanities at Canterbury Christ Church University. Gillian wrote one of the most important papers in my literature review, The Internet of Place (2016). Gillian’s paper influenced my recent RSA blog about connectivity and community development and my 2018 paper for Museums and the Web in Vancouver, about developing a museum on the streets of Hastings.

As my PhD proposal nears completion, I’m sure other themes for exploration will emerge, and I’ll write about these on this blog.

Posted in Research

From my archive – 2001: Communications with nobody: is anybody out there?

Paper for CULT 2001 conference, October 2001, Copenhagen

By Jon Pratty

Image: homepage of 24 Hour Museum website in 2007

[This paper was written for the first conference I attended for the 24 Hour Museum website [now Culture24] which I ran as Editor from January 2001 to August 2007. This was the first chance I had to roll out my ideas for applying journalistic practice into the GLAM sector. I think a lot of this still has relevance and currency now. I am reproducing it exactly as I wrote it, and as it was published in the proceedings, which are still on-line as a pdf.]  

My name is Jon Pratty, I’m a journalist. I’m editor of the 24 Hour Museum website, which is a UK national portal dedicated to getting people to visit museums, galleries and heritage sites all over Britain.

The site has been online since May 1999 and is funded by the UK Department of Culture, Media and Sport (DCMS) through their agency Resource (formerly the Museums and Galleries Commission) and ultimately therefore by Government.

Our homepage has a busy news section, updated every day if possible, plus we write features and interactive webtrails. We are a charitable organisation. We have a searchable database of over 2500 museums and galleries all over Britain. Unlike Culturenet Denmark we do not help people digitise their collections.

What do I bring to this organisation? Well, I just joined the organisation this March. For the past five years or so I’ve been working on national newspapers: a professional journalist in busy newsrooms, writing features and finding stories: from local papers to the Sunday Times to the Daily Telegraph.

From technology to arts, from engineering to energy stories. I’ve also got lots of experience working on the web – I’m a writer on a wide variety of sites, from the British children’s site Schoolsnet to T2 on the Daily Telegraph, from Vnunet.com to Britain’s leading over-50s website – Vavo.com.

So I hope I bring skills new to the 24 Hour Museum organisation: not just experience of how things should be written, but also front-line knowledge of what makes the average reader tick. What would a newspaper journalist think of the 24 Hour Museum? I’ll be honest here and say that when our new management team took over the 24 Hour Museum project (in March 2001) from my point of view, the site was at best, just treading water.

At worst, it wasn’t talking the same language as the people it professes to reach out to – the general public. And I’d like to say I think this is a problem across the whole museums, gallery and heritage internet sector.

My first realisation on getting the editorship was that we needed to increase regularity of site updates – no-one would buy a newspaper if it had the same stories every day – so why should public facing sites such as ours not be updated regularly?

To begin with, I went to weekly updates (from fortnightly) and now we are getting close to updating every other day. Don’t forget that updating is not just putting up new stories – it’s also housekeeping, archiving and updating database entries.

Updating regularly is a spectacular drain on personnel resources – so I am now working with young student journalists in our office to add extra content. In return for writing for us, they get to see their names on the web and are given some advice on writing, some mentorship, and a reference to put on their CV. We get more content. We have also been doing this for five months with web design students.

Visitors to our website can see current examples of student design work in our trail section on the site: ‘Streetstyle’ and the ‘Toy Trail’ were both designed as final year student projects. My second plan is to review the language of the site – in the past it sounded as if it was written for the museum sector – not the public. We aim to use plain English. Not art critical language. I don’t understand it – and I’m not Einstein. But I’m a graduate, with a post-grad qualification, and I know my bullshit from my bullshine.

Don’t take this lightly – only plain English and good stories will grab the eye before web viewers click away elsewhere and you’ve lost them. Flick through a newspaper in a few minutes. Some of us just look at the pictures. Some just look at the crossword. And some really odd people even look at the financial pages.

But if the page you see doesn’t contain the stuff that grabs your eye in the first ten seconds (or probably less!) then you turn over. Why is this important? It’s vital right now. It seems to me that we’ve all been avoiding the word statistic for the last two days of the conference. As a newspaper editor I would be sacked for not considering what the readers want when I plan the edition. Many cultural portals seem to be more concerned with quality of digitisation than whether the end result will ever be looked at.

The stats are important: we get around 160,000 page impressions a month. We suspect the real figures may be higher, as we can’t get our site indexed by search engines that easily, because it is generated from a dynamic database. A fix for this problem is being looked at by our software engineers, SSL.

The third part of our plan is to redesign the site: the essence of our approach is to go back to the first principles of Nielsen – simplicity everywhere. We are stripping out the filler that the first generation of the site was littered with. There’s less extraneous functionality. No scrapbook. No shopping basket. No personalised browsing experience based on the last time you visited.

Nobody wants that stuff. To the public it’s extra complexity – they go blind when they see it. We want to grab them with ever-changing content, then take them straight to the museum they’ve asked for (we have 2,600 in our database), to see an artist they like, or find an event.

Our database now grows itself. We have a simple online form, accessed by password, which allows any museum anywhere to build a record, a web page, live on our update site. Last week we officially rolled this out after a pilot with 25 museums. 500 museums were emailed, and we have now around 80 extra museums inputting data about events, opening times and so on. There are lots more to reach, but we are pleased with progress so far. This is a free service.

My job, as I see it, is now to use mass publishing techniques to get real stats and real feedback from the public. To get bigger numbers through the doors. To make a worthwhile web experience too. Not a worthy one, a fun one.

What will we do from now on?

1. We will use simple language.

2. We will update so much you’ll have to visit every day just to read the stuff and disagree!

3. We will use the resources we have better. We’ll make our dynamic database searchable by more webcrawlers.

4. We’ll open more windows onto our database, use all it can do, instead of spending more money on something more complicated, and therefore worse.

5. We’ll do all we can to make partnerships, make links and do silly things like our Harry Potter trail (launching October 29)

6. We’ll try to remember what fun is, and we’ll try to do it!

Jonathan Pratty

[Formerly] Editor, 24 Hour Museum

References

1. 24 Hour Museum Goes Live [May 1999, BBC, website, sampled 23.04.2013] http://news.bbc.co.uk/1/hi/sci/tech/342954.stm

2. Communications with Nobody – is anybody out there? [October 2001, CULT 2001 proceedings, .pdf, sampled 23.04.2013] http://cult.kulturnet.dk/pt3.htm

3. 24 Hour Museum – from past to future [July 2007, Ariadne, website, sampled 23.04.2013] http://www.ariadne.ac.uk/issue52/pratty

Posted in Journalism, Museums and the Web, On-line publishing, Writing for the Web

Cookies and EC law – what next for culture websites?

Meeting at Cornerhouse, Manchester about the forthcoming EC website cookie regulations, February 24th, 2012

A new EC law regulating the use of website cookies becomes enforceable on May 26th, 2012. It means that, as website operators, cultural institutions need to tell web users which cookies their sites use, and they need to obtain consent from readers before a cookie is set on a user’s computer for the first time. The new law will be policed by the Information Commissioner’s Office [ICO] and they have the regulatory power to levy fines of up to £500k.

In February, Manchester-based web developers, Reading Room, held a seminar about the implications of the new law for website owners and developers. RR have worked with ICO as their web contractors and also on technical advice regarding the new law. The lunchtime session was attended by David Evans from the ICO, who talked about the official position, as the deadline for compliance gets near.

Gary Bryne introduced Reading Room Manchester and explained what a cookie is: a simple bit of info stored on a user’s computer to record website use, or various other kinds of info; the user’s computer gives the cookie back to the website when the user revisits the site. Cookies have been around since the mid-1990s, though they’ve only been an issue for privacy campaigners in the last few years, since the rise of web 2.0 and social media, amongst other things, according to Reading Room.
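For anyone who hasn’t poked at this directly, here’s a minimal browser-side sketch of the mechanism described above – the names and values are purely illustrative, not anything shown at the seminar:

```typescript
// Minimal sketch of browser-side cookie use (names and values illustrative).
// Writing: each assignment to document.cookie adds or updates one cookie.
document.cookie =
  "lastVisit=" + encodeURIComponent(new Date().toISOString()) +
  "; max-age=" + 60 * 60 * 24 * 30 + "; path=/"; // kept for 30 days, site-wide

// Reading: document.cookie returns every cookie for this site as "name=value; name=value".
function readCookie(name: string): string | null {
  for (const pair of document.cookie.split("; ")) {
    const [key, ...rest] = pair.split("=");
    if (key === name) return decodeURIComponent(rest.join("="));
  }
  return null;
}

console.log(readCookie("lastVisit")); // null on a first visit, the stored timestamp after that
```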

So where are cookies used? Pretty much everywhere on the web, but typically, in sites with log-ins; where e-commerce sites have ‘shopping carts’; and in places like Amazon, where cookies are used increasingly to predict user choices based on previous visits.

What’s changed since the beginning of the web? Surely that simple, original, HTML vision is still how the web works? Well, no, not any more. The web has changed vastly. Cookies oil the wheels of web 2.0, tracking your pathways from place to place, helping sites work quicker for you, making them more accessible, allowing fewer clicks to get you where you want to go.

But back in those early days, it could be said, cookies were created naively, and they were often sprinkled all over websites – most website owners don’t realise how many they have. Even the ICO didn’t know how many they had themselves.

David Evans introduced the background to the new cookie laws. So why should we care about cookies? In fact, it’s because of new EC digital regulations that came into law in May 2011 – a year ago. The ICO has been talking about this for two years and now it’s got urgent, after a year’s grace period for industry to accommodate the law, according to Evans.

“My work revolves around privacy,” said Evans. “Websites today create and capture a lot of info. There was a perception that privacy was being abused. But cookies are often necessary to the digital industries.”

“The content economy needs to draw in revenue. Advertisers wanted to build revenue. Publishers and networks wanted to target ads and earn more revenue. It’s about making meaningful connections, via cookie use, to target ads better.”

And so collecting info using cookies was the start of this. They’re used to profile the interests of web users and build a picture of associative connections. A few years ago, after the first curve of web 2.0 happened, people got fed up with inaccurate profiling through places like Amazon. You know the kind of thing: people who bought this – xxxxx – also bought these – xxxx. “Ironically,” says Evans, “The same people who object to inaccurate profiling, are often the people who object to cookies being set on websites and their movements being tracked.”

So for Evans and the ICO, the new EC law isn’t actually a negative or repressive rule; it’s possibly a helpful opportunity to ensure that people are aware of, and comfortable about, what you are doing on your site to collect user information.

And as he says, people have repeatedly expressed that they want more meaningful content online. Paradoxically, according to Evans, 75% of Americans say they would not consent to being tracked online – but then who would say yes when asked that question?

The new law

The law says users have to give consent before cookies are set, but consent can be given in a number of ways, including what ICO call ‘implied consent.’ Browser tools or tabs are mentioned as a key way for cookie permissions to be requested. This might mean a pop-up box appearing when you visit any new site, asking a simple one-off question about permission to use ways to remember your visit to the site.

Browser settings are another option as a cookie control; here a pop-up box might ask users to attend to their own browser settings, for instance, by clicking cookies ‘off’ in the Tools/Options menu. At the moment, and since the web started, browsers are the way most users choose or reject cookies, if they’re bothered by them.

It’s the developer or publisher’s responsibility to sort cookie access, so the key question for developers and publishers of websites is this: what have you done to ensure consent is given? According to Evans, it’s more than that though. This could be a chance to develop a positive outcome; why miss an opportunity to be more open with your users about how you provide the services they want?
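To make that question concrete, here is one minimal sketch of what ‘ensuring consent’ might look like in code – my own illustration, not ICO guidance, with hypothetical cookie and function names: record the user’s one-off answer, and only set non-essential cookies once it is a yes.

```typescript
// Hypothetical sketch: gate non-essential cookies behind a stored consent flag.
const CONSENT_COOKIE = "cookieConsent"; // illustrative name, not an ICO term

function hasConsent(): boolean {
  return document.cookie.split("; ").includes(`${CONSENT_COOKIE}=yes`);
}

// recordConsent() would be wired to the "yes" answer of the one-off question.
function recordConsent(): void {
  // Remember the answer for a year so the user isn't nagged on every visit.
  document.cookie = `${CONSENT_COOKIE}=yes; max-age=${60 * 60 * 24 * 365}; path=/`;
}

function setAnalyticsCookie(visitorId: string): void {
  // Essential cookies (log-in, shopping basket) sit outside this gate;
  // anything merely convenient waits until the user has said yes.
  if (!hasConsent()) return;
  document.cookie = `visitorId=${encodeURIComponent(visitorId)}; max-age=${60 * 60 * 24 * 90}; path=/`;
}
```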

David Evans: “The old EC law was about cookie notification in the T&C’s. This law is different, it’s clearer, more prominent, in easier terms.” According to Evans, the first step is to get fully involved. Work with your developer or webmaster to do a cookie audit. “At the ICO we found out we set seven – and we didn’t know what they all did!” he said.

So it’s clearly important to know what your site is doing. The simple steps described at the Reading Room meeting were these: follow ICO advice; audit, prioritise, and review. Another message was about having a planned approach; in the first instance, look for quick wins. Update existing privacy policies and cookie info, and make the info more prominent.
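A first pass at the audit step can be as light as listing what the browser is actually holding for your own pages. A rough sketch, run in the browser console on your own site (it only sees cookies visible to JavaScript):

```typescript
// Rough cookie audit: list every cookie this page has set, so each one can be
// traced back to the feature (or third party) that actually needs it.
// HttpOnly cookies won't appear here; those have to be checked server-side.
function auditCookies(): { name: string; value: string }[] {
  if (!document.cookie) return [];
  return document.cookie.split("; ").map((pair) => {
    const [name, ...rest] = pair.split("=");
    return { name, value: decodeURIComponent(rest.join("=")) };
  });
}

console.table(auditCookies()); // e.g. seven rows you then have to explain, as the ICO found
```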

See how other people do it; check where else you get online consent and how you get it. Can you adapt other routes or techniques? Look for help; industry-led initiatives that have had ICO input already will not go far wrong.

Evans: “Tell people what you’re doing. We’re not really concerned about neutral cookies, but if you’re gathering secretly, a big database of info about the public, we’d be keen to stop that.”

ICO are aware of digital sector anxiety about cookie warnings scaring off web users. Evans advises publishers to consider what you call cookie warnings; what else can you call them? It might be that other buttons or functions involving user-choice can be construed as consent. For example, consider using log-in accounts or registration; consent can then be given as part of the log-in. However, it wouldn’t be good practice to bury or disguise a notice of cookie setting.

What to do – the ICO view

1. Look for help from other organisations: the International Chambers of Commerce are researching options, amongst others

2. Try a bit of persuasion – hint at how consent and use of cookies can really make sites work, and how you might explain that to users

3. Developing an incremental approach is probably the right way

4. Recognise challenges of implementing these requirements in your strategy

5. At this stage, ICO expect orgs to set out a realistic plan to achieve compliance

6. The warning period is nearly over; after it, ICO may have less patience about implementation

7. Guidance on ICO website is clear and allows flexibility

8. It’s about thinking of your own ways to do this – not relying on ICO to recommend ways to run your own businesses

There will be revised guidance in the run up to the end of the 12 month period in May 2012. ICO will notify areas of priority where they care about things, and some notion of where they won’t be looking so hard. “In summary – we all need to be much better at telling people how our websites work!” said David Evans of the ICO. “Get out there, assess what is intrusive, and do something about it.”

Frequently asked questions

Q: Is everyone taking it seriously? Reading Room: none of the big media or business sector players are actioning responses yet. The BBC point people to a cookies policy page and describe how to opt out by changing browser settings; this was described by David Evans as a minimalistic approach. Across Europe, no-one seems to be being particularly creative or consistent about responses. In Germany the approach taken by regulators is that prior consent has to be given explicitly to cookies, but in France and Spain a much more relaxed view is being taken.

Q: What are the barriers to compliance?
• There’s a lack of awareness everywhere, whether site users or publishers
• Ugly cookie notifications are not wanted by most commercial or business users
• Could splash pages warn about cookies? RR: ICO asked for a splash page. We said we didn’t think they’d want that – it would have led to a repeating experience every time the user visited the ICO site.

Q: What are the essential cookies allowed by EC law? David Evans, ICO: CMS cookies and e-commerce cookies. OK, so what are the non-essential cookies? DE: Google Analytics tracking – but ICO are likely to be lenient about this. For cookies that store user preferences, or that are set by forms on sites, the user should be advised beforehand.

Q: what are the preferred solutions?
• Implied consent – e.g. by clicking a button or tab that opens up a site section, shopping basket or facility like a user-registration panel, consent for the site to drop cookies on the user’s device is assumed to have been given. Implied consent is enshrined in English law
• Not exempt? Prior consent via permission tab, etc.
• In-page consent – simple yes or no box when sending in form
• Pro-active advisement – make it clear and simple early on in a site that cookies are being used

Q: Who is doing it well?
• Cookie Collective – looks good and makes consent a feature
• Pro-active advisement – Reading Room have developed a small animated flash object that can be put on any site, like an open source button. Look here for a demo: http://weusecookies.biz/

Q: What should we do, in simple terms?
• A cookie audit
• Update Ts&Cs with cookie info
• In-page prior consent
• Implied consent
• Pro-active advisement
• Test the ideas with users, they will know what works best

Q: How likely are fines?
DE: “ICO can levy fines up to £500k. But damage, harm to individuals, a provable case where malice or mischief is proven, are hard to evidence and prove. It’s difficult to see how we might get into a situation where a £500k fine is levied as a result of continued and personally damaging activity as a result of cookie use on websites.”

Q: What are Google doing?
DE: They are doing some work, but they are a worldwide company, and EU territorial law doesn’t necessarily have primacy internationally

Q: What happens with third party content on a website like ads served by a third party? DE: ICO would come to the site owner, not the ad server.

Q: Who can guide culture sector organisations?
DE: ICO are the people to give guidance for third sector and charity and culture orgs. The message? Get info out there, nobody quite knows what compliance looks like. There are people offering services as cookie auditors – be very circumspect about this – it could be another web2k situation.

Q: What about sites based outside the UK?
Territoriality is complex. It’s about complying with UK law, so UK law applies here. Where is the company usually based?

References and further reading

Reading Room cookie law blog post with links to presentations from the Manchester seminar – http://blog.readingroom.com/2012/02/24/we-need-to-talk-about-cookies-resources/

ICO presentation about cookie law – http://blog.readingroom.com/wp-content/uploads/2012/02/ICO_We-need-to-talk-about-cookies.pdf

ICO index page on cookie law – http://www.ico.gov.uk/for_organisations/privacy_and_electronic_communications/the_guide/cookies.aspx

ICO official guidance on meeting the law: [.pdf] http://www.ico.gov.uk/for_organisations/privacy_and_electronic_communications/the_guide/~/media/documents/library/Privacy_and_electronic/Practical_application/guidance_on_the_new_cookies_regulations.ashx

Excellent summary and guidance for HE/FE sector and GLAMs from UKOLN’s UK Web focus dept, written by Brian Kelly – http://www.jisc.ac.uk/inform/inform33/CookieLaw.html

Plain language guidance from Brian Kelly about earlier research – http://ukwebfocus.wordpress.com/2011/12/15/the-half-term-report-on-cookie-compliance/

Great plain language article by Dafydd Vaughan on the GDS website – http://digital.cabinetoffice.gov.uk/2012/03/19/its-not-about-cookies-its-about-privacy/

Good basic guidance – http://www.cookielaw.org/

News release from ICO on progress across web sector – http://www.ico.gov.uk/news/latest_news/2011/must-try-harder-on-cookies-compliance-says-ico-13122011.aspx

The JISC view – http://www.jisclegal.ac.uk/ManageContent/ViewDetail/ID/2051/What-does-the-new-cookie-legislation-require-us-to-do.aspx

How industry sees it – http://www.searchengineworkshops.co.uk/blog/google-analytics/cookies-and-google-analytics.html

Posted in Museums and the Web

Getting ready for City Camp Brighton – Expectations?

Open Data City? Why would that be a good thing? Well, it could mean local services that are good value for money and that really do the best job for you, the user and council tax payer.

The Open Data City might be a place where info about culture and tourism is all around us, making the fascinating history of Brighton and Hove really come to life in a much more joined up way; importantly, it may come to life for you as you walk around the place, not as you sit at home on a PC.

A more joined-up information space around us might mean closer connections between organisations that have content and data, and people who want to publish, make games, develop new services, connect us together as a closer society.

All lovely cuddly blue-sky stuff. Underneath the idealism, there are some serious challenges that can be unravelled to help get things working together; if City Camp starts to identify some of these development issues then I think it’ll be starting off in the right direction.

Firstly, for me, it’s *not* completely about quickly developing some flashy and innovative new projects. Yes, we need to make the case for open data by showing how it works with some simple but exemplary ideas. Yes, we need to paint a picture of how this stuff could really change life in the city – for those who don’t get the way that digital tech is revolutionising our world.

No – my first point is just to assert, quietly, the need to survey or map the data that’s out there, and do some simple analysis of how connections could be made between the data clumps. What’s there? What’s missing? Are there digital ‘cold spots’ in Brighton and Hove? This, then, begins to turn into a city-wide data strategy.

Secondly; why have a strategy? Well, it helps if it’s someone’s job to develop clarity and quality in stuff like this. Can we have a small part in recommending that new start-ups in the public sector ensure their data is free and accessible, if appropriate? Yes, I think we should. We get that chance by being organised, clear about intentions and outputs, quality and safety.

Third; who should be at the core of this? Whose job is it to make sure the open data city goes ahead? There’s no straight answer to that. My opinion, as a major media data publisher for the last ten years or so, is that it’s up to organisations of all kinds to realise that their own data has massive value and equity, and that data or information strategy must form the core of business development strategies for the future.

If you ‘own’ a business or culture niche space, and you aren’t the experts at capturing and exporting data about it, you’re ignoring the chance to nurture a key asset for your organisation. Look around you: how many companies consider the latent equity of data in their business? In the public sector, I’d suggest it’s one of the important roles funding bodies, the third sector and arts companies need to develop for the future.

Lastly, I’m hoping City Camp explores trust, consistency and factuality in open data activity. I spent eight years developing a publishing proposition that is now successfully driven by a reservoir of culture data about listings, events, venue info and more. We worked with people in culture places to encourage them to add their arts info themselves. They are the experts in this stuff; they know if it’s correct. We need data owners further down the transaction ladder to keep feeding the database; we need to incentivise them to do it, and we must meet their needs. They are the real heroes of Open Data creation.

I found that ‘crowd sourcing’ data didn’t work if, up the chain, media partners were going to be promised accurate, up-to-date information in a consistent form, guaranteed by some sort of SLA. It’s not just about contracts with data users; one of the biggest issues we had was answering the phone to people who had visited a culture place like a museum only to find it was shut. People get angry if data is wrong or out of date.

But if you get it right, you can publish trustworthy data that others can mix and match into new products of all kinds. Look at any modern retail website, particularly something like an estate agent site; you actually are looking at five or ten different data sources melding into one coherent web publishing offer.
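To make the ‘five or ten data sources’ point concrete, here’s a small sketch of the kind of merge step such a site performs behind the scenes – hypothetical sources and fields, invented for illustration, but embodying the trust point above: the data owner’s own record wins, and gaps get filled from less trusted copies.

```typescript
// Hypothetical merge of venue records from several feeds into one published record.
// Feeds supplied by the venue itself are trusted over aggregated or scraped copies.
interface VenueRecord {
  source: "venue" | "aggregator" | "scrape"; // who supplied the record
  name: string;
  postcode?: string;
  openingHours?: string;
  updated: string; // ISO date of last change
}

const trust: Record<VenueRecord["source"], number> = { venue: 3, aggregator: 2, scrape: 1 };

// Assumes at least one record per venue; the most trusted source wins,
// ties are broken by the most recent update, and gaps are filled from the runners-up.
function mergeRecords(records: VenueRecord[]): VenueRecord {
  const sorted = [...records].sort(
    (a, b) => trust[b.source] - trust[a.source] || b.updated.localeCompare(a.updated),
  );
  const merged = { ...sorted[0] };
  for (const rec of sorted.slice(1)) {
    merged.postcode ??= rec.postcode;
    merged.openingHours ??= rec.openingHours;
  }
  return merged;
}
```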

If public sector organisations want to support Open Data standards and opportunities, it’s key to produce data output that is good enough, reliable enough, and accessible enough to take its place out there in the city data mix.

Here’s hoping. See you at City Camp!

Posted in Uncategorized

Culture sites and visitors – great data visualisation from The Guardian

A glimpse of the Guardian's data visualisation of visitor attractions

There’s a really nice, thought-provoking data visualisation in today’s Guardian [February 23, 2011]. Have a look at the graphic here.

It’s thought provoking to me because it shows how useful simple stats showing a national picture can be. It’s not my interest to see ‘league tables’ of museums or galleries made easy; but it’s sure helpful when it comes to arguing the case for easier digital access, web creativity and the overall development of reach and audiences.

At the moment, we’re still waiting for a simple and functional set of guidelines for measuring web traffic and usage across the public sector media space. This sort of tourism-centric evaluation of reach and visitor patterns can be used and explored further – but until we all agree on ways to measure in consistent forms, this sort of easy to understand visualisation will be beyond the digital culture sector.

At the height of the New Opportunities Fund Digital-era in the museum and gallery space in the UK there were some really simple and useful guidelines for web development; Culture Online also put together some more complex recommendations too. We’ve moved on from that time; web technologies and ways to track user pathways through social media have become much more complex.

Interestingly, it’s become somewhat difficult to track down existing resources about web measurement and standards for public sector digital work. The NOF-digi resources have been superseded by pages on the UKOLN site which are somewhat museum-centric [though still very useful indeed], and the Culture Online ones have disappeared. I’m keen to index and collate what’s out there and what’s relevant today to all kinds of creative and cultural organisations. Can anyone help?

Posted in Museums and the Web

Looking outwards: making culture cloud connections

Looking around me on a train recently, I noticed that about one in three people were doing something digital. They were blasting eardrums with iPods, checking stock prices, watching videos, playing online role-playing games. Someone was even looking at a culture website!

Digital Britain is all around us, right now. We already sit within a real-time web of data. We expect our interactions and cultural output to be geared together and to make new meanings and connections as we go.

As producers and creatives our best channel to audiences in this cloud of online culture comes through managing our core information. That perhaps sounds boring and blank but actually it just means, in the first instance, having simple policies in the museum or gallery to ensure core name and location info about our venue is consistently described then indexed correctly in Google.

That’s right. In the midst of all the perplexing and ever-changing technology we use today, the first step to cultural discovery online is just to use words intelligently to describe your stuff. When you master that, you can allow participatory pathways into collections, exhibitions and more.
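One present-day way to ‘use words intelligently’ in machine-readable form is structured data. The sketch below builds a schema.org-style description of a venue; the property names follow schema.org’s Museum type, but treat the exact markup as an assumption to check against current search-engine guidance rather than a recipe.

```typescript
// Sketch: one canonical record per venue, emitted as schema.org-style JSON-LD so the
// same name, address and hours are described identically wherever they appear.
interface Venue {
  name: string;
  url: string;
  streetAddress: string;
  addressLocality: string;
  postalCode: string;
  openingHours: string[];
}

function toJsonLd(venue: Venue): string {
  return JSON.stringify(
    {
      "@context": "https://schema.org",
      "@type": "Museum",
      name: venue.name,
      url: venue.url,
      address: {
        "@type": "PostalAddress",
        streetAddress: venue.streetAddress,
        addressLocality: venue.addressLocality,
        postalCode: venue.postalCode,
      },
      openingHours: venue.openingHours,
    },
    null,
    2,
  );
}

// Illustrative venue only; the output belongs in a <script type="application/ld+json">
// block on every page that mentions the venue, so crawlers see one consistent description.
const ld = toJsonLd({
  name: "Example Town Museum",
  url: "https://www.example-museum.org",
  streetAddress: "1 High Street",
  addressLocality: "Example Town",
  postalCode: "EX1 1AA",
  openingHours: ["Tu-Su 10:00-17:00"],
});
console.log(ld);
```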

If you do it right, someone else may want to share your content or info, or re-use it in another form.  When this happens, your data takes on new value; something we might call knowledge or information equity.  Equity? Does information have value? You bet.  Mobile computers – like iPhones or Android phones – are taking over as the first point of digital contact for many people these days, and they need data.

In his fascinating recent pamphlet about Cloud Culture, written for the British Council, Charles Leadbeater starts to explore the meanings, politics and moral challenges of putting this culture in the digital cloud.

To my eyes, Leadbeater sees too much danger and negativity in these geared and connected data spaces; it’s a chance for those who already watch us too much to watch us even more, he warns.  In reality, we’re already rigidly connected to countless databases that don’t operate in any sinister way at all.  When we book an airline ticket or tax our car we use the cloud of data.  It’s been making connections for us for the last ten years at least.

Forget the more pervasive big brother, the real-time web brings major gains for us as cultural producers: we can now develop data-led ways to put art into a relational landscape where it can begin to be judged and re-contextualised in a wider social space.

We can use real-time web info to market and promote arts and culture in regions where art and tourism are part of the regeneration agenda.  We can offer community arts projects online access and digital partnering opportunities with other groups situated more remotely.  We can tag items in collections so that stories can be woven between objects, places, eras and languages.

If we’re up to the challenge, we could make the new web work for us, not against us, as Leadbeater seems to imply it might.  Cloud culture may allow new kinds of creativity and digital innovation.  Is it possible for us to develop a new, more culturally-inspired or connected YouTube or Flickr? Perhaps; but let’s not forget sites like those morphed out of the very close relationship between academia and Silicon Valley in the States:  I’d question whether we have developed such fertile collaborations here yet.

Sergey Brin and Larry Page developed an idea for a search engine with a difference while at Stanford University, and they got Silicon Valley to invest in Google. The proximity of high-end tech companies, the culture space and universities in the US drives a lot of the innovation in our web today.

Arts Council England’s ‘Achieving Great Art for Everyone’ consultation proposes the arts drive our creative industries; I’m assuming this phrase refers to conventional culture industries.  In the US, the creative industries are very closely aligned to new media and tech labs. There’s massive convergence between funders, galleries and tech companies.

TechSoup is a prominent stateside funding agency that blurs boundaries between sectors, funding types and agendas. In the UK, the only dedicated charity or trust that funds digital is the Nominet Trust. Seen side-by-side with Nominet recently at the National Digital Inclusion conference [NDI10] in London, TechSoup shone out as a beacon of developmental excellence, with partnerships linking the commercial web developer, culture venues and social development agencies.

Now shine the torch around us on this side of the Atlantic.  How many arts projects unite objectives/outcomes/ideologies from multiple sectors? Technologies like the real-time web, accessed by easy-to-use mobile platforms like the iPhone, give us a fascinating opportunity to converge the interests and agendas of many from within and outside of the culture sector.

To make good developmental connections now, I think we should look closely at the way Stateside agencies like TechSoup have woven strong connections between commercial developers, culture agencies, universities and mass media organisations.  At a recent Arts Council-sector gathering I was struck by the almost total absence of professional experience from outside the public culture sector.

I think we need to look outwards more.  It’d be great to have a tech director at a National museum who comes from a retail, manufacturing or FMCG digital environment; but that will take a leap of imagination from current museum trustees.  No time like the present though!

Posted in Museums and the Web, Uncategorized

Why isn’t my museum on Google Earth?

Adele Beeby from the East Midlands asked this question today (March 10, 2009) on the e-List of the Museums Computer Group:

“Hi everyone,  I’m hoping someone can advise me on an issue we’re experiencing with Google Earth.  I’ve been asked to check that our Museums (and Country Parks etc.) appear on Google Earth and noticed that,  for example, Bosworth Battlefield has about 8 different entries – only one of which is in the correct geographical place and only one of which has the correct name  “Bosworth Battlefield Heritage Centre and Country Park”.

I think the problem stems from the Google Earth entries being fed from various different websites, each using User Generated Content (UGC),  so perhaps mistakes are inevitable? Has anyone else noticed this problem and how have they dealt with it? Thanks in advance!”
Adele Beeby

What a fascinating and topical enquiry! Sadly there’s no immediate remedy, but it raises lots of questions about how we in the museum/culture sector best interact effectively with major information providers like Google. And it’s currently something MCG members have been posting about, in threads about the Digital Britain report, and also Dan Zambonini’s challenge to nominate functions and scope for museum APIs. Dan asked – “if you could have an API in your museum, what would it do, or be for?”

I see these strands as closely related. Google Maps, and more recently Google Earth, have not been using any sort of ‘official’ data source for museum, library, archive and gallery venue info and location data. Most people can see it would be good for Google to be able to deal with one trusted, checked source of info for these useful types of information. I typed ‘Malvern Museum’ into Google Earth and got five different answers about where my local museum is, all info from different sources. Plainly useless. Agreed, one or two ‘reviews’ popped up too, and they were useful in a sense, but the reports were old, uncheckable, and ephemeral in a publishing sense. At the moment, I don’t think web users find this sort of info in any way useable.

So wouldn’t it be great if the Digital Britain report began to sketch out ways that centralised knowledge management could be delegated to one national museum body, so it could take responsibility for co-ordinating the collection of basic data about museums – things like venue info and location. Then Google just talks to one agency and gets the data in one live channel. [Of course – we already have the possible technical means to do this in the form of Culture24 – and that’s no accident; it’s been something the team in Brighton have been keen on for a long, long time…]

Why is centralised knowledge management (in some form or another) important? Everything needs to be paid for, infrastructure needs putting in place and it needs to be comprehensive. The place where info ‘pivots’ is the place to gather it. There’s not a lot of point in the data being generated regionally, one area at a time; a big player like Google wants national coverage, straight away, and it needs to be up-to-date, live and covered by some sort of service level agreement.
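As a thumbnail of what ‘one live channel’ might look like – a hypothetical endpoint and record shape, invented for illustration rather than anything Culture24 or Google actually run – the aggregator holds the authoritative records and serves them on request, so a consumer always sees the current data:

```typescript
import { createServer } from "node:http";

// Hypothetical national venue feed: one authoritative, always-current record per venue.
interface Venue {
  id: string;
  name: string;
  lat: number; // coordinates illustrative
  lon: number;
  openingHours: string; // illustrative
  lastUpdated: string;
}

const venues: Venue[] = [
  {
    id: "bosworth",
    name: "Bosworth Battlefield Heritage Centre and Country Park",
    lat: 52.59,
    lon: -1.41,
    openingHours: "Daily 10:00-17:00",
    lastUpdated: "2009-03-10",
  },
];

// A consumer such as a mapping service fetches /venues and gets the live records,
// not a week-old harvested copy scattered across several conflicting sources.
createServer((req, res) => {
  if (req.url === "/venues") {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(JSON.stringify(venues));
  } else {
    res.writeHead(404);
    res.end();
  }
}).listen(8080);
```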

I know we are all keen on museums being participative and socially responsive, but the Google Earth example clearly shows why, where factual, unshakeable, reliable location data and core venue info is concerned, a more systematic approach would work best. So I’d suggest the best-placed core aggregators of culture venue data should be funders or government agencies (or their partners like MLA, or agencies like Collections Trust). How do we get people to play ball and use the system? I think it should be a rock-solid funding requirement for projects and venues that payment only comes after core info is entered into the publicly available, free-for-use uber-database.

A large culture agency that I have worked with in the past still has no central database of projects, or funded venues, or collection objects acquired; I think a Digital Britain strategy needs to get to grips with such information deficits urgently and make cultural data acquisition a strong organisational priority. Just imagine 25,000 journalists turning up in London in 2012 and there being no trustworthy info on hand about our culture and [sporting] heritage…

Posted in Museums and the Web

New museum web project Creative Spaces sparks debate among web experts

The Museums Computer Group, the major web expert group within the UK museum sector, recently saw a passionate and erudite exchange of emails all provoked by the unveiling of the new Creative Spaces web project. (That’s the Wallace Collection node of the project)

Writing as a committee member of the MCG, I think this has been one of the best ideas threads we’ve had for a long time. Yes, it’s been passionate, and that does indeed get people thinking, and firing up laptops in reply.

I think voices who advocated tact in the exchange (Nick Poole, myself via Twitter, and others) did so because we’re already engaged in working with museum people all over the regions, not always in the most glamorous places; we’re all working for peanuts, doing about ten million things at once, including managing that puzzling interface between museum directors and the onward march of digital technology…

To me, that’s one of the reasons there needs to be some tact in the way we review each other’s projects; if you’d been behind the scenes of projects like NMOLP you’d have seen the sort of passion it arouses. I also saw people (like Terry and Carolyn, and the teams of writers like Rachel and Rowena L) working like absolute stink to get the project done, and ploughing through all sorts of effluent to manage relationships across and through the project. Those who stuck the course deserve medals.

I think the emotionality was also caused by the big fees funding the project – big-ticket jobs like this cause a certain amount of envy, and that, too, leads to comment that doesn’t always please. One gets a picture sometimes of vast (National museum) battleships manoeuvring around a smallish patch of sea, each one guarding its own flanks, carefully manning the bulwarks, in case a stray shell cuts the rigging, or someone jumps ship.

Best things coming out of the Creative Spaces debate for me?

A) The emerging discussion about ‘the plumbing’ (nice metaphor from Paul Walk) being the first job to tackle when working on these complex cross-collection projects. Yep. Of course the data scheme underneath is critical. The website (if there needs to be one!) should be sat on top of the database well down the line of projects like this. How you get the data, on what (copyright) terms it’s given, and how the data is related and relational is the first key task.

B) Another plus has been the thread (from Frankie, Mike E, Kate Fernie and others) about how social nets work in reality, and why you might want, or not want, to play for a while, culturally. This stuff needs to be explored more. Already one or two culture orgs have made abortive attempts at getting things going, and they mostly failed ’cause they didn’t spot that sites only get massive visits when they get the bigger publishing picture: mass audiences, massive budgets, and massive human resources and tech support. That insight mainly comes from expertise that’s mostly, at the moment, outside the museum sector.

C) We’re starting to get the idea, too, that the cool culture venture we dream about here might not be a big project, but smaller-scale, evolutionary, more experimental, more informal. There aren’t any more big pots of money (like ISB) now for this kind of work. We’ve got to be coming up with sustainable and scalable ideas, so some wisdom about the scope and depth of project concepts needs to be found while ideas are still at the back-of-an-envelope stage.

My interests in this?

I’ve long evangelised (and written about, in 2005) ‘the inside-out web museum.’ At my former workplace, my enthusiasm for a more ‘datacentric’ publishing offer drove quite a bit of our re-design thinking, though the final realisation of those ideas is still in the pipeline. But look outwards at recent tech trends and think about how near we are to some sort of breakthrough. We’re wrong to expect a ‘killer app,’ but continuous development and playful experimentation like the (Mike Ellis) Mashed Museum sessions at UKMW08 will get us nearer to some sort of nirvana.

Where to go now, post-Creative Spaces? We ALL need and deserve (as a sector, everywhere) access to data channels that come to us, and do the necessary spidering and data mining to make the most of all the content we might choose to expose and share. And, importantly, let it be live data exchange, not a day old, or a week old, or some such OAI-harvested old hat. The next culture web must be live; after all, we have come to expect that through our day-to-day fun with Twitter and FB.

To get live, we need APIs; they are, of course, the way forward, as Richard Light, Mia and Mike all say. APIs need standards, and Collections Trust’s work with DACS and towards the new BSI data standards is excellent.
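For a concrete picture of the kind of live access argued for here – the endpoint, parameters and fields below are invented for illustration, not any real museum API – a consumer might pull fresh records on demand rather than harvesting a periodic dump:

```typescript
// Hypothetical consumer of a live museum collections API (endpoint and fields invented).
interface CollectionObject {
  id: string;
  title: string;
  creator?: string;
  imageUrl?: string;
}

// Every call hits the live service, so new or corrected records appear immediately,
// unlike a periodically harvested copy of the database.
async function searchCollection(term: string): Promise<CollectionObject[]> {
  const response = await fetch(
    `https://api.example-museum.org/objects?q=${encodeURIComponent(term)}`,
  );
  if (!response.ok) throw new Error(`API returned ${response.status}`);
  return (await response.json()) as CollectionObject[];
}

searchCollection("darwin").then((objects) =>
  objects.forEach((o) => console.log(o.title)),
);
```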

Sharing freely and offering culture content to others for their own use opens doors to commerce and business models, so some movement there gets us towards a more commercially-geared culture web.

And finally? The success of #hashtags on Twitter (check #fakeanimalfacts) proves people can come up with vocabs and impromptu syntax that bind humour, culture, conferences and news together using simple XML. My research interest now is to see how we can map some simple #-like tagging and vocab structures (and maybe the National Curriculum) so we can have cultural fun without needing to build big and expensive portalised web projects…
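A first stab at that mapping could be as light as the sketch below: pull the #tags out of a post and look them up in a small controlled vocabulary. The tags and vocabulary entries are invented examples, not a proposed standard.

```typescript
// Sketch: map free-form #tags onto a small controlled vocabulary (entries invented).
const vocabulary: Record<string, string> = {
  "#darwin": "People > Naturalists > Charles Darwin",
  "#uksnow": "Events > Weather > Snow (UK)",
  "#preraphaelite": "Art movements > Pre-Raphaelite Brotherhood",
};

function extractTags(post: string): string[] {
  return post.match(/#\w+/g)?.map((tag) => tag.toLowerCase()) ?? [];
}

// Tags found in the vocabulary come back as structured terms; unknown tags are kept
// as candidates for the vocabulary to grow into, rather than being thrown away.
function classify(post: string): string[] {
  return extractTags(post).map((tag) => vocabulary[tag] ?? `unmapped: ${tag}`);
}

console.log(classify("Snowed in at the museum today #uksnow #darwin #openlate"));
// → ["Events > Weather > Snow (UK)", "People > Naturalists > Charles Darwin", "unmapped: #openlate"]
```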

JP

Posted in Museums and the Web, Uncategorized

Finding #Darwin – Where’s the Twitter Map?

Just a quick post this morning to follow on from yesterday’s #Darwin fun. While writing yesterday I was looking for the link to the lovely animated Google Earth 5 #uksnow tweet map, and only just found it again, so here it is: http://www.barnabu.co.uk/uksnow-twitter-animation-google-earth-5/

And if you tire of watching that vid of the animation, Barnabu has done a browser-borne version of it, minus the rather fetching music: http://www.barnabu.co.uk/visualizing-twitter-activity-inside-the-google-earth-plugin/#more-494

(Just for once, I haven’t hidden those links behind easy, short urls.)

This lovely socially-generated, but individually-curated work made me think about how people ‘situate’ themselves in cultural terms. Is it important that #uksnow tags have geodata? Yes, because there was a collective or memetic agreement, an agreed context,  that taggers were buying into when they Tweeted using the # tag.

But take the situation across to the cultural space, the place where today #Darwin taggers may well slow Twitter down, and there’s less understanding of the informational context in which people are #Darwin tagging. I can see that it would be great to be able to see where in the world people are digitally remarking about the founder of evolutionary theory.

It’d be interesting to analyse the mix of political, religious and cultural cues that result from a geographically placed map of #Darwin tweets. Where might it take the debate between creationists and evolutionists if we could visually show the geo-distribution of the protagonists?

Getting people in the arts to begin to think about place in digital terms sounds really geeky, but when you suggest thinking about Barnabu’s #uksnow map and, say, landscape painting, or poetry, people might begin to embrace some ideas around this. As I wrote yesterday, it’d possibly need some central co-ordination, inspiration or creativity to sketch out some agreed #tags for art/artist terms or vocabularies.

Maybe that’s a useful role for cultural authorities like the Arts Council; in the past, however, ACE have shown no interest in centralised informational policy.  There’s no time like the present though…

Posted in Uncategorized

Charles Darwin gets web 2.0 and joins Twitter!

Charles Darwin goes on Twitter - is he more Web 2.0 than you?

Just when you think the world of information science and the web has gone to sleep, bored to tears with endless discussions about when the semantic web will pop up, along comes something fabulous.

Hard on the heels of last week’s fascinating #uksnow Twittering and the lovely animation of tweets across Britain as the snow rolled over us, this week we’re being over-run by Darwin200 tweets using a #darwin tag.

Naturally the great man himself is Tweeting from beyond the grave – if you’d like to follow him he’s @cdarwin, not surprisingly. I wonder if he’s got a netbook with dongle, an N96 or an iPhone? I don’t suppose there are many powersockets on The Beagle. Have a look at his homepage on Twitter: http://twitter.com/cdarwin

Please can someone now do a #darwin mashup map so we can find out where everything is? Over the next weeks and months a string of events are being held all over Britain. Check out http://www.darwin200.org/. Disappointingly, while a few months ago there was a rudimentary RSS feed of D200 events, it doesn’t seem to be around any more. The D200 site seems really flat and web 1.0.

Thinking about Twitter tags, these user-tagged info clouds could be great low-tech, high-flexibility models for socially-driven information creation. I think it’s fascinating that within just a few weeks, people are making up their own tag taxonomies, placing them in a networked environment, and letting nature take its course. Kind of like Darwin, really.

What’s next? A simple, standardised list of artist names, eras, types? It’s not that complex, because what seems to be happening is that users quickly twig which is the most powerful or sticky #tag to use and then the memetic effect that seems to energise Twitter takes over, and the #tag goes everywhere.

Meanwhile, check out the latest #darwin tweets in my RSS feed box up there on the right of the Machine Culture homepage.

JP/Feb 10

Twitter: @jon_pratty

Posted in Museums and the Web, syndication