Marketers have been tracking user behavior online since the advent of the World Wide Web, but as applications, platforms and tools become increasingly intelligent with the addition of machine learning and AI, more nuanced patterns are available for analysis.
Clearly, marketing is not the only function using behavioral analytics. The techniques are also being applied elsewhere in organizations, such as in cybersecurity and human resources.
As always, the art of the possible results in potential opportunities and risks. When organizations adopt more types of behavioral tracking capabilities, they need to ensure that what they’re doing is responsible from several points of view.
Privacy regulations such as the EU’s General Data Protection Regulation (GDPR) and the California Consumer Privacy Act (CCPA) limit data collection and use. At the heart of both is the consumer’s right to know which companies are collecting data about them, what kind of data they’re collecting, how the data is used by an organization, and with whom that data is shared. Meanwhile, other restrictions and guidelines are emerging, such as facial recognition laws, that can also impact the accuracy of behavioral tracking.
More fundamentally, companies need to do a better job of understanding what they can do with behavioral analytics and what they will or won’t do based on their own values.
The art of the possible is evolving
Rapid technology innovation, particularly in the areas of machine learning and AI, is enabling more sophisticated forms of behavioral analysis at scale. For example, banks and financial institutions traditionally have analyzed customer transactions for cross-selling purposes using structured data. Increasingly, they're combining structured and unstructured data to understand the "whys" in addition to the "whats."
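The article doesn't describe how such a combination works in practice, but the basic idea can be sketched in a few lines. The sketch below is purely illustrative: the field names, the chat excerpts, and the keyword heuristic are all invented for the example, and real systems would use trained models rather than a single-keyword test.

```python
# Illustrative sketch only: pairing structured transaction records with an
# unstructured signal (a support-chat excerpt) to attach a "why" to a "what."
# All field names and the keyword heuristic are invented for this example.
transactions = [
    {"customer": "C1", "amount": 2400.0, "category": "travel"},
    {"customer": "C2", "amount": 95.0, "category": "grocery"},
]
chat_notes = {
    "C1": "Booking a honeymoon trip, asked about travel insurance",
    "C2": "Complained about fees, considering switching banks",
}

def enrich(txn):
    """Merge a structured transaction with a crude signal mined from chat text."""
    note = chat_notes.get(txn["customer"], "")
    txn = dict(txn)  # copy so the original record is untouched
    txn["churn_risk"] = "switching" in note.lower()  # crude "why" signal
    return txn

enriched = [enrich(t) for t in transactions]
print(enriched[1]["churn_risk"])  # True: the chat note mentions switching banks
```

The structured record alone says only that customer C2 spent $95 on groceries; joining it with the unstructured note surfaces the far more actionable fact that the customer is considering leaving.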
“Now the question is not only how much you spend, but why would you spend that amount of money where you spend it, with whom and why,” said Gustavo Pares, CEO of cognitive services provider NDS Cognitive Labs. “Nowadays, when you use a web app or chatbot, we can know right away a bunch of information [including] your age or other demographic information such as income and the hours you’re connecting to ask for something. At that moment, if we have provided the correct campaign for particular context in time, it’s very powerful.”
In addition to the data companies collect about people directly, third-party data brokers can bridge information gaps. It's also possible to infer a person's age, education and where they come from based on what they say or type when communicating with a chatbot, Pares said.
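Pares doesn't spell out how that inference is done, but the underlying idea is that word choice carries demographic signal. The toy sketch below shows the mechanism at its crudest, classifying a message's register from hand-picked lexical cues; the cue lists are invented for illustration, and production systems would rely on trained language models, not keyword lookups.

```python
import re

# Invented cue lists for demonstration only; real systems learn these
# associations from data rather than hard-coding them.
SLANG_CUES = {"lol", "omg", "btw", "tbh"}                    # informal register
FORMAL_CUES = {"regarding", "furthermore", "kindly", "sincerely"}

def infer_register(message: str) -> str:
    """Classify a chat message's register from simple lexical cues."""
    tokens = set(re.findall(r"[a-z']+", message.lower()))
    informal = len(tokens & SLANG_CUES)
    formal = len(tokens & FORMAL_CUES)
    if informal > formal:
        return "informal"
    if formal > informal:
        return "formal"
    return "neutral"

print(infer_register("omg lol that price tho"))            # informal
print(infer_register("Kindly advise regarding my order."))  # formal
```

However naive, the sketch illustrates why chat transcripts are such a rich behavioral source: every message a user types leaks stylistic signals that can be correlated, rightly or wrongly, with demographic attributes.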
Nigel Duffy, global artificial intelligence leader at professional services firm EY, said targeted advertising and product recommendations, both of which use behavioral information, tend to be fairly superficial. More troubling is the collection of psychographic information using social media quizzes and affect detection (understanding a person's mental state).
“I think there’s some really compelling literature on the potential for affect detection, but my understanding is that the way that’s implemented oftentimes is rather naive. People are drawing inferences that the science doesn’t really support [such as] deciding somebody is a potentially good employee because they’re smiling a lot or deciding that somebody likes your products because they’re smiling a lot,” said Duffy.
There are also HR chatbots available for candidate screening that elicit behavior-revealing responses that provide more insight about a candidate than a CV and cover letter alone. In addition, unlike their human counterparts, chatbots can screen candidates at scale 24/7. From an ROI point of view, the concept is compelling.
On the other hand, in October 2019, Bloomberg Law reported that the Equal Employment Opportunity Commission (EEOC) is investigating at least two cases that allegedly involve unlawful discrimination as the result of algorithm-assisted HR-related decisions. In November, the Electronic Privacy Information Center (EPIC) filed a complaint with the Federal Trade Commission (FTC) against recruiting technology provider HireVue. In the complaint, EPIC states, "HireVue's Business Practices Produce Results that Are Biased, Unproveable, and Not Replicable." [The FTC does not currently list any actions involving HireVue.]
“People are increasingly questioning themselves about these kinds of applications and what ethical risks might be associated with them,” said EY’s Duffy. “I think that’s an area that both vendors [and] customers should be really taking a long, hard look at in determining what the risks are worth, the benefits and whether the risks are consistent with their values.”
What organizations should consider
Duffy and NDS Cognitive Labs’ Pares agree that companies need to exercise more thought and care regarding technology use and procurement than they may have in the past because of machine learning and AI.
"From the perspective of behavioral analysis, it's a greenfield [opportunity]," said Pares. "We could analyze more things than we can imagine right now, and we need to evolve and mature our processes in parallel in terms of how to bring to the table committees from HR, from compliance, from communications."
The challenge at the C-suite level is understanding the risks associated with AI that are not being captured by existing governance and controls processes.
“I think in general, risk management organizations are not probably engaged in this conversation enough,” said EY’s Duffy. “I think there’s a connection or relationship between either the procurement organization that is buying these technologies or the data science or analytics organization that may be internally building or applying those technologies. There needs to be a relationship between those teams and the risk management functions.”
A wise question to ponder: What does that relationship look like today? Are those discussions happening? Who's having them? Are they the right people? Do governance and controls need to be updated? Are people asking the right questions? Who should be, or is, responsible and accountable for asking them? The exercise is not an academic one; it's a responsible one.
“If you think about the kinds of decisions that we’re often trying to delegate to AI, we have human beings making those decisions today,” said Duffy. “We have governance and controls around the human decision-making process, so the same governance and controls structures can be applied to AI too.”
Some organizations have chief ethics officers who help ensure that these issues are addressed. Others may rely on the CIO, but the scope of potential issues associated with behavioral tracking typically spans several functions that may include some combination of legal, compliance, sales, marketing, HR, and risk management.
Importantly, organizations should ensure that whatever guidelines or rules they put in place are consistent with their corporate values. Duffy said organizations in general are not very mature in terms of being able to effectively implement the values of a company when it comes to AI.
“I think there’s a real gap, and I think there are a few pieces to it. One is the technology community needs to think about these questions a little better. Two, I think there’s an important role for independent review because if you’re not measuring your implementation of your values, then it’s very hard to improve them,” said EY’s Duffy. “If you don’t have a third-party independent review of those measures, then it’s very hard to implement them because it’s hard to hold people accountable for them.”
Behavioral tracking capabilities will continue to become more sophisticated, and as they do, organizational leaders are wise to ask themselves what they will and won’t do based on customer and stakeholder expectations, legal and regulatory requirements and their organization’s stated values. Covering all that ground requires vigilance and discourse among various subject matter experts on an ongoing basis given the dynamic nature of technology innovation, global business competitiveness, stakeholder expectations, laws and regulations.
Lisa Morgan is a freelance writer who covers big data and BI for InformationWeek. She has contributed articles, reports, and other types of content to various publications and sites ranging from SD Times to the Economist Intelligence Unit.