30 Days of Ecommerce Testing Day 28: Ethics in Ecommerce

Day 28 of 30 days of ecommerce testing is:

Identify potential ethical issues with the use of ecommerce data. What are some public examples?

I started by reading this post https://www.nchannel.com/blog/ethical-issues-in-ecommerce/ which seemed very US-focused at the beginning, but it broadens throughout the article to cover other areas and regulations.

My first thought for a public example of a data breach (and therefore an ethics breach too) is the Equifax one, which may not directly affect ecommerce sites, but it is something that could just as easily happen to them.


In Germany we have an organisation called “Schufa” (a protection association for general credit insurance); I think it’s the equivalent of the American “Equifax”.
If you ask your bank for a credit offer, the bank sends your data to this organisation to check your financial standing. That request may itself affect your Schufa score; no one knows ;-). It’s almost never possible to reach a score of 100%, even if you have never had a credit.
Another factor in the score is where you live, which in my opinion is unethical.


Though I was not able to find an example where ethics in ecommerce was not followed, I was able to find an excellent mind map on ethics in ecommerce.


Please do have a look.


Potential issues I could find or think about:

  • lack of care in protecting customer data from external parties (mainly through sloppy security)
  • sharing data with third parties without informed consent of the user
  • collecting and processing customer data that is not related to the business (including data mining)
  • collecting and processing sensitive customer data (data consisting of racial or ethnic origin, political opinions, religious or philosophical beliefs, or trade union membership, genetic data, biometric data, data concerning health or data concerning a natural person’s sex life or sexual orientation)
  • processing business data of other legal persons (think of all the free cloud services that companies use)
  • data ownership (like T&C where your reviews or photos can be used for the platform’s own advertising without specific and express consent and for free)
  • differential pricing, which can lead to price discrimination (e.g. http://www.cbs46.com/story/27923981/cbs-investigates-online-shopping-disparities)
  • misleading or fraudulent product descriptions (including plain counterfeit products)
  • misleading or fraudulent advertising (including astroturfing and covert social media marketing)
  • freedom of choice, when recommendation/comparison engines or search results are “weighted”
  • generally tailoring information and creating “filter bubbles” (not just for news but e.g. “health” products or even dangerous materials)
  • circumventing legal obstacles (e.g. health and safety standards or intellectual property rights)
  • malware delivery in digital goods (from Sony’s rootkit to ads serving Trojan horses)
  • limiting accessibility due to DRM in digital goods

And from the customer’s side:

  • C2C business issues (payment and, again, product fraud)
  • general payment fraud
  • misleading product reviews (including organized smear or boost campaigns)
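The differential-pricing item in the list above can be made concrete with a small sketch. Everything here is hypothetical (the function name, the signals, the markups); it only illustrates the pattern: the same product is quoted at different prices depending on signals the shop infers about the visitor, which is often legal but arguably unethical when undisclosed.

```python
# Hypothetical sketch of differential pricing: the shop quotes a
# different price for the same item based on signals it infers about
# the visitor (device type, location, whether they have bought before).
# All signal names and markup factors below are made up for illustration.

BASE_PRICE = 100.00

def quote_price(visitor):
    """Return a price for one item, adjusted by inferred visitor traits."""
    price = BASE_PRICE
    if visitor.get("device") == "mac":
        price *= 1.10   # treated as a proxy for "willing to pay more"
    if visitor.get("zip_code") in {"10001"}:
        price *= 1.05   # markup for an (invented) affluent postcode
    if visitor.get("returning"):
        price *= 1.02   # returning customers quietly pay slightly more
    return round(price, 2)

print(quote_price({"device": "mac"}))                    # 110.0
print(quote_price({"device": "pc", "returning": True}))  # 102.0
```

From a testing perspective this is interesting because two testers visiting the same page can legitimately see different prices, so "the price is wrong" bugs need the visitor context recorded too.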

Using cookies is also an issue: tracking everything a user does and how they navigate. Companies sometimes use this data for their own purposes in the name of “personalising the experience”.
Some websites also put too much psychological pressure on the user to purchase, like booking.com.
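The tracking mechanism behind this is simple, which is part of why it is so pervasive. A minimal sketch (all names hypothetical, no real framework assumed): the first response sets a persistent visitor ID in a cookie, and every later request carrying that cookie lets the site link page views into one profile.

```python
# Minimal sketch of cookie-based tracking (hypothetical names):
# the first response sets a persistent visitor ID, and every later
# request that sends the cookie back lets the site stitch individual
# page views into a single browsing profile.
import uuid

visit_log = {}  # visitor_id -> list of pages viewed

def handle_request(cookies, page):
    """Handle one request; return (cookies_to_set, visitor_id)."""
    visitor_id = cookies.get("visitor_id")
    if visitor_id is None:
        visitor_id = uuid.uuid4().hex              # new anonymous-looking ID
        cookies_to_set = {"visitor_id": visitor_id}
    else:
        cookies_to_set = {}                        # cookie already present
    visit_log.setdefault(visitor_id, []).append(page)
    return cookies_to_set, visitor_id

# First visit: no cookie yet, so the site issues one.
set_cookies, vid = handle_request({}, "/shoes")
# Later visits send the cookie back, and the profile silently grows.
handle_request({"visitor_id": vid}, "/shoes/red-sneakers")
handle_request({"visitor_id": vid}, "/checkout")

print(visit_log[vid])  # every page this visitor viewed, linked by one ID
```

The ethical question is not the mechanism itself but that the resulting profile can be kept, shared, or merged with other data sources without the visitor ever seeing it.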


Day 28 https://wp.me/p9EXXo-6c


To Twitter:

A lot has been said in this thread already, so I found an article that covers slightly different issues: Data Mining: Where Legality and Ethics Rarely Meet

Here’s a scary quote:

“What’s alarming to me isn’t the strategies companies are applying for their own benefit, but the large, large companies forming data alliances for someone else’s benefit,” says Allen Nance, president of e-mail marketing communications and CRM firm Mansell Group in Atlanta.

The gist of the article is that we are not only tracking user behavior for company A or company B, but that company A and B along with data from company (or government organization) C are being used to create “psychographic profiles of people’s behaviors and habits.”

And this is not illegal, but is it ethical? You may have given permission to A and B (implicitly or explicitly), but the consumer has no say in whether these profiles can be merged to create an online persona without their knowledge.

Opinion: This is where testing overlaps with data science/data mining, and I think this is becoming more of an issue, which we need to look at closely. There may be a place for testers to take part in helping create/maintain a data science code of ethics.

-Dave K