Managing an ageing workforce

I spent yesterday morning at the launch of ‘Managing An Ageing Workforce’, a new report sponsored by the CMI (Chartered Management Institute) in collaboration with the CIPD (Chartered Institute of Personnel and Development). Their survey looked at responses to changes in retirement age and retirement practices, and at how prepared organisations were for these changes. Since the survey was done, the coalition government has announced that the Default Retirement Age (currently 65) will be phased out.

I was lead author on the report, which was very much a multi-way collaboration.  Retirement is a complex business.    On the one hand,  pension limitations drive people to work longer; on the other, the recession has led to companies searching for ‘natural wastage’.   Older workers can be seen as reservoirs of deep experience, or as people past their peak. Old hands versus fresh blood.  It’s a challenge.

The report itself discusses perceptions and preparedness in some detail.  At the launch event, lack of organisational preparedness was the main theme.   There were some terrific case studies based on a diverse set of organisations.

One final comment: the discussions reminded me of earlier debates on maternity leave.   Retirement, like maternity leave, is often handled on a highly individual basis.   Legislation sets the parameters of what is possible, but eventual plans are often individual and idiosyncratic, based in some part on the value that the organisation places on the individual.  Like maternity, retirement involves a huge identity shift; and (like maternity) it can be a difficult subject for individuals to discuss in advance.   Publications like this one do a very useful job in helping us have those conversations.

Does your survey reflect reality, or is it just wishful thinking?: Lessons from the Mirror of Erised

Old Mirror, by Sthetic, on Flickr

One of my favourite devices in the first Harry Potter book, Harry Potter and the Philosopher’s Stone, is the Mirror of Erised.   Harry comes upon this old mirror one day, and when he looks into its depths, he sees his (dead) parents standing behind him.   His friend Ron sees himself winning at Quidditch.  The mirror, it is explained, distorts: it shows the viewer their heart’s desire rather than ordinary reality.   (Incidentally, Harry nearly wastes away gazing into this mirror, hungry for the vision it shows him. But that’s by the by).

Only the perfectly happy person would look into the Mirror of Erised and simply see themselves.

Wistful dreaming of a perfect world is quite a feature of the surveys I get to fill in.   All too often, the people who create questionnaires and research studies seem to concentrate on all the things that are important to them, and utterly neglect the wider context.  The result is a weird distortion of the customer’s reality.

Facebook’s ‘Like’ button is a nice example of this. ‘Like’ has become a vague indicator of the number of fans a person or item has. It’s very little use as an actual indicator of liking, because there’s no context. To get some sense of real levels of liking, you might wish to know about dislike, or about degree of liking.
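To make the missing-context point concrete, here is a toy sketch of why a raw count misleads: the page with more likes is not necessarily the better-liked page once you know how many people actually saw each one. All the numbers here are invented for illustration.

```python
def like_rate(likes, viewers):
    """Likes per viewer: a crude, context-aware indicator of liking."""
    return likes / viewers

# Hypothetical pages: B has more raw likes, but A's audience likes it far more.
page_a = {"likes": 500, "viewers": 2_000}      # niche page, devoted audience
page_b = {"likes": 2_000, "viewers": 100_000}  # huge page, indifferent audience

assert page_b["likes"] > page_a["likes"]          # by raw count, B "wins"
assert like_rate(**page_a) > like_rate(**page_b)  # in context, A wins

print(like_rate(**page_a), like_rate(**page_b))   # prints 0.25 0.02
```

The same raw 'Like' total can sit on top of wildly different audience sizes, which is exactly the context the button never gives you.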

In other surveys, I can complete page after page of grids, and yet at the end feel that the questionnaire never really captured my opinion. Perhaps they grilled me on my attitudes to, say, Bluetooth, without asking whether I knew what it meant, whether I had it, or whether I was convinced it would fry my brain.

Of course, we’re all guilty of distorting our vision to a certain extent.  The problem is that the distortion can become so great that you can fail to collect the simple, essential information that will allow you to make sense of what’s going on.

Some tips on avoiding the ‘Like’ trap:

1. Always get the context

Spend time asking the basic questions.  The most important questions that I ask in an interview or focus group are not the ones in my brief: they’re the simple ones about who you are, what you do for a living, how you keep in contact with your friends.  This bedrock information will help you make sense of feedback.

2. Get an outsider’s view

Check your questions with an actual person who is not part of your marketing/PR/social media enclave, and then take up their suggestions.

3. If in doubt, just ask people what they think

You would be amazed how often no one thinks to do that. A simple open question works wonders. More context, more emotion, more reality.

4. Accept the feedback

Lots of feedback is a bit random or even negative. Some of it will be right outside your remit, or indeed God’s. Make sure there are ways of passing information on to the people who can act on it, but then listen to what people are saying.

This can be damned hard, but sit there, quieten all the voices in your head which are yammering about the unfairness of it all, and just listen. What are people saying?

Now you’re in a position to make a meaningful decision.

POSTSCRIPT: I’m moving my blog over to a different host in the next few days. Please excuse any mess that may ensue.

63% of poll results are entirely made up

The Home Office published a report last week, entitled ‘Sexualisation of Young People’. It was trailed on the radio, along with some of its radical recommendations, which include relegating lads’ mags like Nuts to the top shelf. It’s an entirely worthy subject, and as the mother of a young teenage boy and a preteen girl, I was pretty interested in what it had to say.

The report’s author, Linda Papadopoulos, states firmly in the introduction that:

This is not an opinion piece, the evidence and arguments presented within this document are not based on conjecture but on empirical data from peer reviewed journals, and evidence from professionals and clinicians.

Unfortunately, as I read through, I became increasingly distracted by the type of evidence being presented in the report.   Because I am a huge geek, I started going back to a few of the sources being mentioned, to see whether they were really saying what the report author claims they were saying.  

And right now?  I don’t really trust the report.  

Here are some of the things that make my spidey-sense tingle:

1. Vagueness about method

There are references to focus groups, but no detailed information.  The main evidence gathered appears to be that provided by the organisations and pressure groups consulted, together with desk research. 

2. Imprecise use of opinion poll evidence

‘A recent YouGov survey found that 27 per cent of boys are accessing pornography every week, with 5 per cent viewing it every day.’

Which YouGov study? How old were the boys? How do you define pornography? (I’m not being deliberately critical – in a discussion of sexualisation, there is a huge difference in meaning if the boys are 16 rather than 10.)

‘Almost half of children aged 8-17…’

Why is an 8-year-old being lumped in with a 17-year-old? This makes me suspect that the sample is not big enough to support a more detailed breakdown. I also want to know more about the methodology of a study that gathers this information from teenagers and children.
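The sample-size worry can be made concrete with the standard margin-of-error formula for a proportion: split a sample into narrow age bands and the uncertainty on each band balloons. The 1,000-respondent total and the 50% figure below are illustrative assumptions, not numbers from the report.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# Illustrative: a poll of 1,000 children aged 8-17, reporting that
# "almost half" (p = 0.5) do something.
whole_sample = margin_of_error(0.5, 1000)  # roughly +/- 3 points

# Split that sample into ten single-year age bands of ~100 each,
# and the uncertainty on each band roughly triples.
age_band = margin_of_error(0.5, 100)       # roughly +/- 10 points

print(f"whole sample: +/-{whole_sample:.1%}, single age band: +/-{age_band:.1%}")
```

This is why pollsters quote the 8–17 group as a lump: a percentage for 10-year-olds alone, from a sample this size, would come with an error bar too wide to be worth printing.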

3. Uncritical use of ‘voodoo polls’

Surveys have found for instance that a high proportion of young women in the UK aspire to work as ‘glamour models’ or lap-dancers.

Do they indeed? That’s appalling. The reference here is ‘Deeley, 2008’. This tracks to an article in the Times, citing a web survey conducted by an internet TV company called Lab TV. 63% of the 1,000 girls questioned by Lab TV apparently thought Jordan was a good role model. I say ‘apparently’ because that’s as far back as I can track this one – unless Lab TV is the one belonging to the (American) National Defense Education Program, which I doubt. Still, I bet it was a great sample!

Mind you, it’s really not the same question, is it?  Thinking someone is a good role model isn’t quite the same as wanting to grow up to be that person.

4. Drawing on US evidence in a UK report

In the US, the number of magazines targeting the teen market rose from five to 19 between 1990 and 2000.

And the point would be? In an argument about inappropriate sexualisation in UK culture, the state of the US market is not obviously relevant.

5. Argument from dramatic, unsubstantiated examples

Padded bras, thongs and high heeled shoes are marketed and sold to children as young as eight.

I do think that girls’ clothes are often inappropriate, but as the mother of an 8-year-old girl, I can’t say I’ve ever spotted these examples. Princess sandals may count as high heels. I know the point that the writers are trying to make, but I would rather see it made with more everyday examples.

6. Straight-out confusion

Before the mainstreaming of internet access, it was asserted that the average age of first exposure to pornography was 11 for males. However, research suggests that this age is now much lower.

There are two references quoted to support this statement. The first is from 1985 – the pre-internet one. The second is ‘Greenfield, 2004’. I Googled this paper, which turns out to be evidence presented to a US government committee. It in turn references various studies which tried to estimate the age at which adults first remembered coming across explicit sexual material. It doesn’t support the ‘much lower’ argument in any way that’s obvious to me.

As I said, this is a subject that’s close to my heart, as a parent who is constantly trying to negotiate these issues in an internet-enabled world.  My first reading of this report, though, is that it is indeed mostly opinion.  That’s a disappointment.  Opinion is very important in this area, but I think the shakiness of some of the references quoted really does the subject a disservice.

That’s why it’s called ‘research’

A wee rant. I came across this conversation about online communities on Research Live. There is a discussion of the pros and cons of research-based online communities and branded online communities, and right at the end a commenter says that all this community talk is ridiculous, and that simply listening to internet buzz via networks like Facebook is the way forward.

Listen, my children. Many, many years ago, I was a wee trainee research manager for a company that did a very boring thing: we made the fragrances that go into washing powders. We did not think this was at all dull. We lived and breathed functional fragrance (quite literally – marketing was right next to the factory). We researched all sorts of things: sensory research, perfume trends research, international laundry research*, brand positioning research.

The one thing we couldn’t do was listen in on a general conversation, because for the most part the ‘moment’ we were researching was transient and private.

So it is with many products and brands.  For every Facebook and iPod and Easyjet and Carling Black Label, there is a product which is humble or private or low-key or taboo or just not terribly interesting.  It may be everything to its creators, but it doesn’t generate talk.   This does not stop the producers of these things from wanting to find out what people think.

A research community, like a survey or a piece of qualitative research, is a way of lining up your users and asking them to talk about something they may scarcely think about, day to day**.  When it comes down to it, your customers may have vivid experiences and strong opinions which would never see the light of day outside of the direct conversation between researcher and user, or brand and user.

Don’t get me wrong, online metrics are important and of course you should collect them; but in many cases they will be absent, deeply uninformative or even misleading.   Also: (deep breath) not everybody is online; not everybody important to your category is online.    They’re certainly not all on Facebook.    And I’m flailing in frustration now, but really, systematic research is one of the best methods of finding out what people think of your (slightly boring, not-dominating-Twitter) thing.

*Anyone who thinks that it would be impossible to talk about washing powder for very long is sorely, sorely mistaken.

**For example, blank video tape, back in the day.  Try mining that.

The curious case of the game show neuroscientists, or how NOT to research an online community

I’m a fond member of the blogging/social networking site, Livejournal.   Over the last few days, I’ve seen the most incredible shitstorm unfold, over the cack-handed efforts of two rogue academics to research what they were pleased to call ‘the cognitive neuroscience of fanfiction’.

Background

First, a bit of background: Livejournal (one of the original social networks) is a vast and varied set of subcultures and interconnected blogs, dominated by film, TV, book and gaming fans. It is more counterculture than culture, really: it tends to be left-wing, creative and anarchic.

One of the many subcultures in the mix is fanfiction writing:  stories that people write using characters from books, film, music and TV.  Fanfic writing is female-dominated, and some of it (but by no means all) is very explicit.   There is fanfic for everything, from Jane Austen through Doctor Who (rewriting the works of Russell T. Davies) to The Mighty Boosh.

Fanfic writers have an odd hobby, but they are a pleasant and literate bunch who are much studied by academics.   In fact, academics (like Henry Jenkins) completely adore this stuff  – it pulls feminism, transgression, social networking and copyright laws all into one place. What’s not to like.

The questionnaire is launched

Anyway, a few days ago a friend forwarded me a link to an online questionnaire that she found intriguing.  It was about fanfiction, it seemed a bit amateur, and what did I think of it?   The link was banner-style, and it looked a lot like the Cosmo-style pop quizzes that are memed all over the place on social networks.   There was a reassuring link to a FAQ page giving the names of the researchers, Ogi Ogas and Sai Gaddam, and their academic affiliations at Boston University (the BU links no longer exist).  This page also gave a long explanation of their interests in cognitive neuroscience, and what this had to do with fanfiction…

“We’re deeply interested in broad-based behavioral data that involves romantic or erotic cognition and evinces a clear distinction between men and women. Fan fiction matches this criteria perfectly.”

…Uhuh.

The researchers had apparently also consulted a couple of well-known bloggers in the area, and got their guidance, feedback and endorsement. Apart from the fact that the academics weren’t making the changes suggested, it all seemed fine.

The online questionnaire itself (captured here in two parts on an LJ Doctor Who site – you may see a generic age warning for content on some LJ pages) was a rather different story. I took a look. There were 70 questions in all (one per page), starting with some brusque questions about one’s gender, age and ethnicity. It even asked for your SAT scores. The questionnaire proceeded to a number of fantastically detailed and rather odd questions about fanfiction reading habits; and then it got heavily intimate, asking (amongst other things) exactly what kinds of sexy stories the respondent read and (deep intake of breath) whether they ever had rape fantasies.

The questionnaire… does not go down well.

The questionnaire was barely up before LJers started complaining about the content.  LJ people love to complain at the best of times, and there was a lot of ground to cover here:

a) Terrible questionnaire design

b) Inaccurate, amateurish and homophobic wording

c) Prurient lines of questioning

d) No attempt to screen out under-18s

e) Lack of the usual information on privacy, anonymity and confidentiality

f) And (my favourite) frequent criticisms of the methodology. How in the name of heaven were the researchers going to draw any valid conclusions whatsoever about subcortical processing, given their data collection methods?

What the researchers hadn’t bargained for was the thoughtfulness of the response.  Livejournal people are a fairly literate bunch.   Stuff like feminist analysis of television casting decisions is a walk in the park for many of them.   At least some of the people who came across the questionnaire were social researchers, lecturers, feminist academics, and indeed neuroscientists.  They didn’t like what they saw.

Ogi attempts to engage with respondents

The lead researcher opened a journal (now showing a single entry, an apology) for the purposes of answering questions about the research; and in the space of about two days, that journal moved from polite, rather subservient requests for clarification, to a full-on flamewar, as the lead researcher put up his questions for comment. As he engaged, he revealed more and more of his (very strange) thinking (he’s deleted his comments on this thread, but you can work some of them out), and his subjects began to research him in earnest.

Google is your friend (and Wikipedia, and Youtube)

Turns out, Ogi Ogas had forgotten to mention a few things:

  1. He wasn’t actually affiliated with Boston University any more
  2. While they were indeed neuroscientists, their PhDs were in visual processing and artificial intelligence
  3. The lead researcher’s PhD was funded by the US Department of Homeland Security
  4. The lead author gained earlier infamy as a successful contestant on the American version of ‘Who Wants To Be a Millionaire’

And, last but not least, there was another teeny fact missing:

The authors had just signed a substantial book deal with Penguin for a popular science book entitled: ‘Rule 34: What Netporn teaches us about the brain.’

(As one commenter put it: ‘What? You think we can’t Google?’)

(NB – the literary agency has changed the book title now, to ‘Rule 34’)

So they asked about these Netporn theories, and then the shit really hit the fan. It’s hard to follow the logic, but his theory (screencapped here) drew on data-mining of adult sites aimed at men, and posited that explicit fanfiction for women could be equated with male interest in male-to-female transsexuals (?!), and that both of these things could be used to model subcortical processing (whatever that is) in male and female brains. Or something.

Somewhere around there, people stopped arguing with him and started taking direct action.  The academics started complaining to Boston University, the creatives started creating cat macros, the neuroscientists started writing long introductions to neuroscience and the specialists in gender identity just started screaming.  There were a few more updates, and then Ogi locked his journal.  He issued a few wandering emails, and removed most of his journal (and indeed many of the comments that he’d left elsewhere).  Naturally, the LJers (being used to the ways of flamewars) took screenshots of the more alarming content well in advance.

Aftermath

From beginning to end, Ogi Ogas maintained that he wasn’t doing social research, he was just collecting data.

The day after the shitstorm, someone reported their conversation with Boston University’s research ethics board: he wasn’t formally affiliated, and he didn’t have ethics board clearance. His university pages have now disappeared, the questionnaire is down, and at the time of writing he seems to be deleting all his comments elsewhere.

On the face of it, this is simply an extreme example of shoddy and unethical  research which will reflect badly on anyone who tries to do research online, especially within a community or subculture.   Anyone who approaches that particular community in the future is going to encounter deep suspicion.

It goes further, though.  One of the very odd features of the whole story is that Ogi Ogas and his colleague took a lot of care to approach prominent people. He got a great deal of help from some of them (he also got a magnificent brush-off from one, but that’s another story*).   All of those people are writing to explain that he seemed genuine, and they trusted him.   They offered the same critique of the questions that anyone would.  He seemed to listen, but went ahead with his own version.  This is either arrogance or sociopathy.

One of the people he approached has written to apologise for being taken in, and to reprint some of their correspondence.  She warns him that his attempts to research this particular community are probably dead in the water.   In his reply to her, he’s chirpy.

‘Eventually we’re going to go through this all over again with the far right. It will be interesting to see who throws the meaner punch.’

And I’m left thinking: is this the ultimate troll?

The book is due out in 2010.

*My favourite part of these people’s very lengthy smackdown is the grand postmodern refusal:

‘And so we decline to be interviewed by you; we decline to be the objects of your fascination; we decline to be naturalized; we decline to allow our political project to be cited in support of the very discourses we are trying to question.’

ETA: When respondents bite back

I actually hesitated in writing this up, because I was worried that mainstream researchers will see this as a distant kerfuffle in an unlikely subculture.   But I agree strongly with the writer at the Rough Theory blog (see below), who suggests that Ogas may fail to take valid community criticisms seriously, because he has so thoroughly Othered them as respondents.  In other words, ‘they’re so weird, we don’t have to be careful with them.’

The second general learning point for anyone thinking of attempting a controversial online questionnaire is how quickly things go viral. Ogas was terribly happy about the response rate (reliability and validity were not a concern); that same speed of process led, very rapidly, to critique, opprobrium and direct action. Before you engage? Do us all a favour and go on that methodology course.

Some other quick links:

Rough Theory’s roundup

Unfunny Business’s summary of the whole mess

Feminist SF

Jonquil’s thoughts on respondents who bite back

Words and labels and ambiguity

This Tuesday I went along to the Occupational Psychology/Organizational Behaviour catch-up day at Birkbeck and experienced the luxury of a full day noodling around new research concepts and the thrill of hearing the word ‘critical’ from someone else’s lips.

It was an interesting mixture of heavy content and some of the worst PowerPoint I have seen. Some of the standouts were actually the Master’s projects, rather than faculty.

the stand-out talks

Diane Burns presented a nice project based on discourse analysis, looking at the meaning of ‘collaboration’ in a health service setting.  I liked the concept of ‘strategic ambiguity’ – the ways in which people completely fail to define what they are doing, so that they can do lots of different things. Or maintain their power, mwahahahah.

Michael Clarke presented a piece (with photos!!) on the meaning of fatherhood for men working in new media.  This was a delightful talk, introducing the ‘hero culture’ of new media working, and the challenge of overlaying identity as a father onto a youth-driven, youth-pretending industry.  Michael’s talk was the one that made me want to run off and check out some theory (on performing identity), because I’m sad like that.

Andreas Liefhooghe (faculty) gave a perplexing talk (with many, many photos) which personally I found intriguing yet profoundly irritating.  He has done a lot of research on workplace bullying, from a critical perspective, and he drew on Foucault to talk about the difficulties of conceptualising bullying – introducing ‘bullying’ produces a bully/victim binary categorisation that casts people into certain roles. OK, it was more complex than that.

The talk was illustrated with lots of still black and white shots of institutions (the prison, the Panopticon) and people as victims.  It felt extremely manipulative, both as Powerpoint and as talk…verging on the Emperor’s New Clothes in the overuse of bloody Foucault…and yet.   In the midst of my annoyance with it, I could see what he meant. The reduction of personal difficulties to simple categories (stress, bullying, harassment) and binaries (harasser-victim) can set off long chains of unintended and unhelpful consequences.

the joy of labels

What he overlooked, though, is the relief in naming something, even if that naming is a little bit wrong.   It is the difference between listing vague symptoms (I’m sure he’d appreciate the medical analogy) and realising that it all adds up to a recognisable syndrome.  Invoking the label creates a positive basis for action, too, even if there are many things wrong in using the label.

and your point?

But, you say, what is the relevance of all this fine theory to anything practical at all?

I think…I think for me, it focuses me on the need to learn about what is there in front of me (in a qualitative interview or in a questionnaire), where the co-creators of the study want to push it down some well-worn paths and offer some very familiar interpretations.  Many times, the issues don’t fit those neat categories, and it is very tempting to adjust things so that they do.  So, I’m reminded of the value in hanging onto ambiguity and interpreting the thing, not its reflection or its near neighbour. If that makes any sense.

Learning to love the stacks

Books in a Stack, by austinevan on Flickr


I’m just starting a new project that involves doing a literature search before getting stuck into interviewing.

I spent untold years doing ad hoc research, in a cab-style ‘take the next project’ sort of way, and I can’t tell you what a complete pleasure it is to be allowed to go and look at the literature.

It’s maybe peculiar to the places I used to work, but there was quite a disdain towards Doing A Bit of Reading, and a complete horror of exercising one’s database search skills, with all the Boolean logic that entails.

Not all projects need much beyond basic immersion in what went before, but I do projects which involve setting up a new Thing, like a framework, as well as gathering data. What I have learned:

4 benefits of full-on research:

1. Not reinventing the wheel. NOT REINVENTING THE WHEEL!!! So many, many things have been done before, some of them rather well.

2. Allowing time and space to really consider the issue. Nuff said.

3. Constructing new knowledge. Outside well-researched topics, you’ll typically end up synthesising information from a grab-bag of odd-looking books, papers and commentary. This is marvellous.

4. You can come up with theories and hypotheses! And then test them! Possibly just me…asking the right people the best questions is a luxury indeed.

And so on to …*drumroll*

7 tips for a successful search 😛

1. Join a university library. This can be hard work without current connections, but there may well be a great, relevant library that you can use.  Online catalogues look like the answer to your prayers but in practice can be expensive and difficult to access, depending on the subject area.  Use your academic or professional affiliations to get access to books and journals.  I use the Wellcome Library,  Birkbeck College,  and Cambridge University Library; I also have access to the British Psychological Society’s collection.  Wellcome has a great cafe.

2. Recognise that opinions are legion but facts can be sparse. Most literature reviews outside academia involve several Emperor’s New Clothes moments: typically, the absence of any evidence whatsoever for strongly-held popular opinions.

3. Web resources look infinite but actually aren’t. This is the never-ending Web Ring problem where you chase promising sets of hyperlinks that eventually loop around themselves in a giant spiral of mutual attribution.

4. Look in parallel fields for great answers. It is quite likely that your particular horrible problem has been discussed and perhaps even solved in a rather different field of operation. In my experience, teaching is a completely overlooked source of great information.  Take e-moderating for example: 90% of the problems online researchers face have already been researched, written up and peer-reviewed by academics working in teaching and online learning.

5. Find the official resources. Government sources are excellent and entirely free.  You can find entire datasets this way.

6. Identify the star papers and books as quickly as you can. In any field, there are some essential pieces of reading.  In a brand new field, one of the most useful things you can find is the high-level textbook.  These are your Little Helpers. Love them and treat them well.

7. Be flexible, don’t print out in too small a font, and carry paper, pens and a USB stick everywhere.