On the Trump/Pence media survey, ‘Dark Patterns’, UX and ethics

Warning: this article contains strong personal and professional opinion, plus some survey design talk!

The Trump Make America Great Again Committee has a ‘Mainstream Media Accountability Survey’ at: https://action.donaldjtrump.com/mainstream-media-accountability-survey/

Go take a look; in fact, I urge you to take it before reading on.

As a User Experience professional, I laughed; I cried. As should be clear to you, this is a #fakesurvey.

It starts with a couple of simple questions; these reduce the barrier to getting started – a good design approach as, in a poll, the first click is the hardest to get.

But it then employs a number of techniques which completely invalidate it as a survey. These techniques are not just poor design; they are ‘dark patterns’, defined at darkpatterns.org as ‘tricks used in websites and apps that make you buy or sign up for things that you didn’t mean to’.

Here are some of the most obvious dark patterns used in this survey.

 

  1. ‘YES! YES!! YES!!!’

If you are a POTUS supporter, you would click ‘yes’ to almost every question. The survey designer is getting you into a ‘Yes, Yes, Yes’ mode which stops you thinking – the basic trick of any salesperson – invalidating this as a legitimate survey.

And that’s not even considering ‘acquiescence bias’ – the tendency to agree with statements in a survey (and therefore the reason why such a format is to be used sparingly, and with a balance of positively and negatively keyed items).
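To make that concrete, here’s a minimal sketch – Python, with invented items and numbers – of how balanced keying defends against acquiescence: negatively keyed items are reverse-scored, so a respondent who mindlessly agrees with everything lands at the neutral midpoint rather than the ceiling.

    # Hypothetical 5-point agree/disagree items (1 = strongly disagree, 5 = strongly agree).
    # keyed = +1 for positively keyed items, -1 for negatively keyed (reverse-scored) items.
    items = [
        ("The media reports fairly on this issue.", +1),
        ("The media distorts coverage of this issue.", -1),
    ]

    def scale_score(responses, points=5):
        # Reverse-score negatively keyed items, then average.
        scored = [r if keyed == +1 else (points + 1) - r
                  for r, (_, keyed) in zip(responses, items)]
        return sum(scored) / len(scored)

    # An acquiescent respondent who clicks 'strongly agree' (5) on everything:
    print(scale_score([5, 5]))  # 3.0 - the neutral midpoint, not a runaway 5.0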

 

  2. Limited options and using text boxes for unwanted responses

A number of questions have severely limited response options. Here’s Question 6:

Question 6: Which television source do you primarily get your news from? (Options: Fox News, CNN, MSNBC, Local news)

This list is reasonable; these three cable news channels have the highest viewership (Fox News was the most-watched cable news channel in 2016). But why not list one or two of the next-most-popular services, and why is there no ‘Other’ option?

Now, to be fair, Question 7 is a follow-up:

[Screenshot of Question 7]

But writing in a text box is more effort than clicking a radio button. In addition, our recognition memory is better than our recall, so options that are not presented are less likely to come to mind. So any significant answer that requires recall and typing into a text box will be under-represented.

Here’s where I got even more twisted and cynical, and asked myself: why is this a separate question, rather than an ‘Other’ option on Question 6?

The fact that this is a separate question means that the results of Question 6 can be reported without any reference to Question 7. If a respondent’s primary source was NOT listed in Question 6, they might report it in Question 7. But Question 7 is not about your ‘primary’ TV news source, so the respondent’s primary news source gets bundled in with responses from people who list their ‘other’, secondary sources.

It also means the researchers can report that “XX% of respondents said that their primary TV news source was [Fox News, CNN, MSNBC or Local news]” without any reference to those whose primary source was none of those – because their answers were recorded under Question 7, not Question 6. So the figures for Question 6 will be higher than they should be, because they’ve not been diluted by the other valid responses.
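A quick worked example – with entirely invented counts – shows how much this can inflate the reported figures:

    # Hypothetical: 1,000 respondents, 200 of whom have a primary TV news
    # source that Question 6 doesn't offer (so their answer lands in Q7).
    q6_counts = {"Fox News": 500, "CNN": 150, "MSNBC": 100, "Local news": 50}
    other_primary = 200

    listed_total = sum(q6_counts.values())      # 800 respondents in Q6's base
    true_total = listed_total + other_primary   # 1,000 respondents in reality

    for source, n in q6_counts.items():
        reported = 100 * n / listed_total  # the publishable Question 6 figure
        honest = 100 * n / true_total      # the share among all respondents
        print(f"{source}: reported {reported:.1f}% vs honest {honest:.1f}%")
    # Fox News: reported 62.5% vs honest 50.0% - every listed option is inflated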

 

  3. The Dear Leader, not the issues

In case you forget who you are supposed to be supporting, the questions make frequent mention of ‘Trump’s presidency’, ‘President Trump’s executive order’, ‘the Republican Party’, etc. This survey is not asking you about the issues; it’s asking you to reaffirm your faith in your leaders.

 

  4. Leading questions

Almost every question is a ‘leading question’; it’s very clear what viewpoint you are expected to agree with. Here’s Question 14:

[Screenshot of Question 14]

The important part of this question is ‘contrary to what the media says.’ Again, there are no checks and balances on these types of questions.

As Lou Rosenfeld tweeted about this survey:

[Screenshot of Lou Rosenfeld’s tweet]

 

  5. Loaded questions

Question 5 actually made me laugh – a high point in this UX professional’s experience of the survey:

[Screenshot of Question 5]

This reminded me of the classic loaded-question example ‘When did you stop beating your wife?’ (quoted, for example, in Wikipedia’s ‘Loaded question’ article). It’s difficult to give a reasonable answer. I did ask myself ‘Which question in this survey does the worst job of abusing valid scientific process and misrepresenting society?’ Question 5 is a strong contender.

 

  6. Complexity

With the survey designer’s obvious need to include a reference to authority, to state the viewpoint with which you are supposed to agree and to incorporate biased language designed to help you keep saying ‘yes, yes, yes’, some of the questions get awfully convoluted. Here’s question 11:

[Screenshot of Question 11]

‘Keep the questions simple’ is really rule #1 of survey design. It’s quite possible that the respondents who have got this far won’t even understand some of these convoluted, even tortuous, questions. But, then again, the crux of the question is in the first few words, so respondents don’t have to think, or even read beyond ‘Do you believe that the media unfairly reported on President Trump…’, to answer ‘Yes!’.

This complexity, allied with the leading early part of the question, reduces both the ability and the need of the respondent to think; it maintains the ‘yes, yes, yes’ mindless clicking.

 

  7. Lack of clarity

Question 15:

[Screenshot of Question 15]

OK – people of which faith? Christians? Muslims? Jews? Hindus? Sikhs? People of all faiths? People with faith in authority? In science? I hypothesize that this question will get a huge ‘Yes’ score.

 

  8. Propaganda

Well, yes, the entire ‘survey’ is propaganda, but there is one example clearer than most – Question 12:

[Screenshot of Question 12]

Here’s an exception to the ‘yes, yes, yes’ flow. You might reply ‘Yes’ (and may be thinking “I was aware, but obviously a lot of people weren’t, and I’m angry about that.”). Or you might reply ‘No’ (and may be thinking “But now you’ve brought it to my attention, I’m really angry.”). Was there actually a poll? Was it a real poll or a #fakepoll? We’ll never know. And as respondents we don’t really care; we’re in the flow, clicking away, and on to the next question – and the survey designer doesn’t care either.

Breathe

OK, let’s step back, calm down and think about what we’ve got here.

Professional misconduct and unethical behaviour?

To me, this survey seems like a shocking example of professional misconduct.

The designers deliberately use psychological trickery – ‘dark patterns’ – to collect the data they want, rather than objective data. There is no attempt at ‘validity.’

What’s more interesting is that the designers, working on behalf of the Republican Party, are willing to use this psychological trickery on their own supporters. This seems like extremely cynical and unethical behaviour. But let’s not get into politics!

To anyone in Marketing, Communications, Public Opinion Research or Customer Experience/User Experience research it should be clear that this brings their (our) disciplines into disrepute. Can the designers be dismissed from any relevant professional body for this behaviour or stripped of any titles, credentials or awards? Probably not.

Beyond poor survey design, beyond malign survey design

A bigger concern I have is that the impact of this ‘dark patterns’ survey and the motivations behind it go way beyond bad design and the collection and communication of invalid data.

This cynical abuse of accepted scientific techniques is socially corrosive; it accelerates the erosion of trust in expertise and in the process of science, and perpetuates partisanship. The designers are willing to do that for short-term gain.

But, then again, what’s the purpose of this ‘survey’ exactly? Well, we haven’t quite reached that, so I’d just like to mention a couple more issues:

 

  9. The length of the survey

The survey is long at 25 questions. Research suggests that answers to questions positioned later in a questionnaire will be faster, shorter, and more uniform than answers to questions positioned near the beginning.[1]
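As an aside, this declining effort is exactly the kind of low-quality responding that survey analysts screen for. A minimal sketch – Python, with invented data and purely illustrative thresholds – of flagging ‘speeders’ and ‘straight-liners’:

    # Hypothetical per-respondent data: seconds spent on each question and
    # the answer given. The thresholds below are illustrative, not standard.
    def low_quality(timings, answers, min_seconds=2.0, max_uniformity=0.9):
        speeding = sum(t < min_seconds for t in timings) / len(timings)
        most_common = max(set(answers), key=answers.count)
        uniformity = answers.count(most_common) / len(answers)
        return speeding > 0.5 or uniformity > max_uniformity

    timings = [6.1, 4.0, 1.2, 1.0, 0.9, 0.8]  # faster and faster...
    answers = ["yes"] * 6                     # ...and all identical
    print(low_quality(timings, answers))      # True - this respondent gets flagged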

This late-questionnaire effect is certain to be exaggerated when nearly all the responses are identical. Those respondents who are still hanging in there for the final question have expended significant effort – they have expressed a degree of commitment, and that commitment should be rewarded – and exploited – so…

 

  10. The final question

The last question – Question 25 – is the key one:

[Screenshot of Question 25]

If you’re the committed Republican the designers are screening for (for this isn’t so much a genuine survey as a screener), you will answer ‘YES, we should spend more time and resources on this issue’.

Then you record your vote:

[Screenshot: recording your vote]

And you get your reward:

[Screenshot: the donation request]

So – finally – the truth is out: while this ‘survey’ may produce some (invalid) data that could be used to show support for the viewpoints expressed in it, the main aim of this survey is to extract money from the faithful.

Look whose survey this is:

[Screenshot of the survey’s sponsor: the Trump Make America Great Again Committee]

It’s a fund-raising committee. Having started with a few reasonable questions, the survey is designed to remind the faithful of their creed – of what their beliefs should be – and to dissuade respondents who disagree with the statements and sentiments presented from completing it. It gets respondents who do agree into the flow of ‘Yes, yes, yes’, and then it’s so easy to answer the last question with another ‘yes (we should spend more time and resources)’ – and then you’re asked to put your money where your mouth is.

Excellent business performance?

Consider this: whoever produced this survey had a business goal – to maximize donations – and they’ve probably done a great job here. The extremely cynical, biased and carefully designed survey probably means that only a relatively small number of participants – those who do not realise they are being coerced, and those with high acquiescence – will get through to the payment page.

But the ‘conversion’ rate is probably pretty high. The survey designers should get a bonus.
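To see why, run a hypothetical funnel – every number here is invented: even with heavy drop-off across 25 questions, a pre-screened, committed audience can out-convert a broad appeal several times over.

    # Hypothetical funnel; all figures invented for illustration.
    started = 100_000
    finished = int(started * 0.30)  # 30% of starters survive all 25 questions
    donated = int(finished * 0.10)  # 10% of those committed finishers donate

    print(f"{donated} donors = {100 * donated / started:.1f}% of starters")
    # 3000 donors = 3.0% of starters; a cold fund-raising appeal converting
    # at ~1% of the same 100,000 would yield only 1,000.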

Some bad news for society

If this were a one-off it would be bad enough. It suggests we cannot trust any survey without delving into every particular of its design. Ordinary citizens do not have the time, nor the expertise, to fact-check every statement printed or spoken, nor to understand the ‘dark pattern’ techniques being used or abused when the results of a survey like this are reported.

Trust in authority has eroded significantly, perhaps largely as a result of the entirely unprecedented advent of the World Wide Web and social media. There is a social revolution under way, faster and more far-reaching than any that has happened before.

And what about UX, context and ethics?

I have met naïve Human Factors/UX researchers who believed they were champions for the ‘user.’ But, in fact, all UX work takes place within a context – usually a business context. We are not champions of users, we are champions of the organization or context in which our work gets done.

Whether we work for a business, for government or a political party, in education, for a charity or non-profit, we still work within a context with organizational goals, and users’ needs and desires can usually only be met partially, if the organization is to be sustainable.

A silver lining

In this social revolution our conversations, our relationships, our attitudes, our organizations and social cohesion are being torn apart. After every revolution comes a period of chaos: the old structures are torn down, and it takes some time for new ones to emerge. I think we’re just getting into a significant amount of social chaos – hopefully not a ‘reign of terror’.

A silver lining I see is that this chaos will remind us why we have and need ‘experts’, and that we will (re-)learn who to trust and who not to trust. We are already finding ways to trust each other online rather than formal authorities, in certain situations, even though we are also seeing the downside of that in our filter bubbles, echo chambers and growing partisanship. We will also (re)discover that democracy and civic society rely on much more than voting mechanisms and governmental structures.

None of us have the resources or experience and knowledge to fight every battle, nor to argue authoritatively on every issue. But, as UX professionals, we should be drawing lines to identify, call out and prevent professional incompetence, misconduct and unethical practices.

Just my opinion. Glad you got this far.

[1] Galesic, M. and Bosnjak, M. (2009). Effects of Questionnaire Length on Participation and Indicators of Response Quality in a Web Survey. Public Opinion Quarterly, 73(2), 349–360.
