Unveiling the Unobtrusive: The Rising Power of UICs in Culture Measurement
There are several ways a business might choose to measure its company culture. The fruits of most are confidential to the organization. But a more public, open-source method of measurement is gaining popularity: unobtrusive indicators of ethical culture (UICs) that collect data without engaging employees and require no special access or permission. Since these are publicly available, businesses might as well start collecting and using the information for their own benefit. The tools are out there for the asking, and negative data can, and probably will, be used against them.
Unobtrusive indicators of culture include:
- Social media (company messaging, customer interactions, etc.)
- Company websites and blogs
- Online reviews (from employees and customers)
- Consumer complaints
- Company policies/mission/code of conduct
- News articles
- Public company reports and filings (e.g., annual reports)
- Executive data (work and non-work activities of CEOs, etc.)
- Expert analysis or opinion (e.g., interviews, blogs, surveys)
- Legal data (e.g., suits brought against the company)
- Photos or other observable evidence of the work environment/setting
- Awards and accolades (e.g., Fortune 100 Best Places to Work)
- Demographic data on employees and leadership (as indicators of diversity, etc.)
- Political affiliations or activities of executives
With business ethics and corporate reputations under growing public scrutiny, noninvasive UICs can provide invaluable insights into a company’s cultural and ethical state. Nonetheless, some 39 percent of companies are not conducting culture assessments; most that do are making little of the results. Amid so much concern about the tone and values emanating from company executives, efforts to assess their contribution to organizational culture may be driven by research conducted outside their organizations. That puts at risk the very executives who are showing so little interest in pursuing corporate culture assessment.
So we’ve been thinking a lot about UICs. How might they best be used to measure the ethical culture of a workplace? What research is needed to ensure their validity (accurate measurement and inferences)? If we conceptualize UICs as a suite of outputs flowing from the organization being assessed, they can presumably tell us something about its culture. A strength of UICs is that the outputs (impacts) they capture may be more tangible and even more important than avowals of organizational intent or the culture employees are experiencing firsthand. And because traditional culture assessment surveys bear such problems as reactivity, survey fatigue, bias, and manipulation, UICs can be an attractive addition to the toolkit of culture assessment.
A growing amount of available data and the integration of artificial intelligence (AI) into the analysis of UICs are further revolutionizing this field by offering real-time monitoring, advanced data analysis, and predictive modeling capabilities. The gains in speed (and potentially accuracy) enabled by automated number crunching will make it much easier to use UICs. AI-driven processes will enable the construction of increasingly complex composite scores that might contain dozens, even hundreds or thousands, of factors, rendering their interpretation increasingly difficult. It is also likely that AI will be used to identify additional UICs. All of this points to the importance of weighing black-box against explainable machine learning in order to avoid overreliance on AI to guide decisions.
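To make the idea of a composite score concrete, here is a minimal sketch, assuming a handful of hypothetical, already-normalized UIC factors and illustrative weights (none of these names or numbers come from any real assessment). A transparent weighted average like this keeps each factor's contribution visible, which is exactly what an opaque, AI-derived score built from hundreds of inputs would not.

```python
# Minimal sketch of a transparent UIC composite score.
# Factor names, values, and weights are hypothetical and purely illustrative.

uic_factors = {
    "employee_review_sentiment": 0.72,  # normalized 0-1 (higher = more positive)
    "customer_complaint_rate": 0.35,    # normalized and inverted (higher = fewer complaints)
    "news_coverage_tone": 0.61,
    "litigation_frequency": 0.48,       # normalized and inverted (higher = less litigation)
}

weights = {
    "employee_review_sentiment": 0.4,
    "customer_complaint_rate": 0.2,
    "news_coverage_tone": 0.2,
    "litigation_frequency": 0.2,
}

# A weighted average keeps every factor's contribution visible,
# unlike a black-box model that outputs a single opaque number.
composite = sum(uic_factors[k] * weights[k] for k in uic_factors)
contributions = {k: uic_factors[k] * weights[k] for k in uic_factors}

print(f"Composite culture score: {composite:.2f}")
for name, value in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {value:.2f}")
```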
The specificity of information is also sharpening due to these technologies. Such things as customer sentiment will be easily segmented by region, age, gender, race, and so forth, offering tangible power and risk. A company might want to learn about and improve its reputation within specific segments of interest, particularly if it perceives a problem. At the same time, any disparities could be explored and used by others to highlight problems the company faces—or to disparage it.
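As a rough illustration of that kind of segmentation, the sketch below groups hypothetical customer-sentiment scores by demographic attributes using pandas; the column names and values are invented for the example. Large gaps between segments are the disparities a company (or its critics) might zero in on.

```python
# Rough sketch of segmenting customer sentiment by demographic attributes.
# The data and column names are hypothetical.
import pandas as pd

reviews = pd.DataFrame({
    "region":    ["NE", "NE", "SW", "SW", "MW", "MW"],
    "age_group": ["18-34", "35-54", "18-34", "55+", "35-54", "55+"],
    "sentiment": [0.81, 0.64, 0.42, 0.55, 0.73, 0.38],  # 0 = negative, 1 = positive
})

# Average sentiment per segment, sorted so the weakest segments surface first.
by_segment = reviews.groupby(["region", "age_group"])["sentiment"].mean()
print(by_segment.sort_values())
```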
While UICs might be used to great effect if properly leveraged, it’s possible that enthusiasm for them is overblown. One potential problem is that UICs may not capture everything we want to measure about culture, because they measure via proxies rather than through more direct forms of appraisal. There are also a number of issues regarding the validity of the approach (i.e., unobtrusive measures may not agree with internal measurements, or with reality), so incorrect conclusions might be drawn. And there is a risk that businesses will lose control of transparent material they provide, or that incorrect assessments will be formed externally.
Organizations are wondering what the future holds for UICs and how they can best leverage these tools to improve their organizational culture and reputation. At the same time, they are concerned about how others will use the data in ways that could impact their image or efforts.
First, they must consider whether UICs are a valid measure of culture. Evidence is emerging that they are, at least to some extent, a viable tool for understanding and managing culture.1,2,3 While much more work needs to be done, what seems certain is that organizations should prepare to address potential challenges that will arise in the face of extensive efforts to implement UICs.
As research quickens, there is a lot of room for potentially flawed research and analysis, and there will likely be much questionable deployment of UICs that is not backed by solid research. Specifically, we will probably encounter poor methodology in the coding and analysis of otherwise valid UICs, leading to superficially valid but practically meaningless results. Improper generalization is likely to be a problem, too, whereby results from one organization are applied to all organizations or applied across industries without evidence that such conclusions are transferable.
The best way to measure the validity of UICs is to measure a given company from the inside and the outside and then compare the results. This has been done very little, perhaps because organizations are typically reluctant to share internal culture data. The idea of embracing external measures beyond their control, even if valid, may also concern them. The benefits of doing so might get them over that hurdle, eventually.
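One simple form such an inside-outside comparison could take is sketched below, under the assumption that an internal culture-survey score and an external UIC-based score exist for the same set of business units; the numbers are invented for illustration. A strong association between the two would count as preliminary evidence of validity, and a weak one would argue against drawing conclusions from those UICs.

```python
# Sketch of an internal-vs-external validity check for a single company.
# Scores are hypothetical: one internal culture-survey score and one
# external UIC-based score per business unit.
from statistics import correlation  # requires Python 3.10+

internal_survey = [3.9, 3.2, 4.4, 2.8, 3.7, 4.1]        # e.g., mean ethical-culture survey score
external_uic    = [0.71, 0.55, 0.83, 0.41, 0.66, 0.78]  # e.g., composite UIC score

r = correlation(internal_survey, external_uic)
print(f"Pearson r between internal and external measures: {r:.2f}")
# A strong positive r would support using these UICs as an outside-in proxy;
# a weak or negative r would caution against it.
```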
If such internal-external research found that certain UICs are not valid and should not be used to judge the company from the outside, the research itself would offer the organization some protection. If the UICs were deemed valid, they could then be used with confidence to gauge its culture from the outside, adding a tool to the reputation management kit.
Beyond questions of culture, many UICs might inform the company’s efforts in areas of marketing, recruitment, reputation management, and attractiveness to investors. The external outputs might also indicate something about how the company’s activities tangibly impact the world. After all, a company with a “good” culture and negative impacts poses a contradiction of sorts; it might suggest that its culture is only superficially good, or good only for those on the inside.
The combination of being able to peer inside a company without the need for access, and the recognition that external impacts matter no matter where the data come from, will likely drive continuing expansion of UICs for various purposes. In the near future, many companies will monitor UICs in a dashboard that provides a snapshot of how the organization is seen from the outside, and they will manage their impacts and reputation accordingly. Some companies may already do this. Meanwhile, investors will use comparable data to assess risk and likely success. And some companies will provide UIC-based ratings to help guide customers with regard to the ethics of their purchases.
Now, let’s look at a recent study that examined photos of the workplace as artifacts of culture. It concluded that artifacts coded from photos, but not represented much in interviews, “indicate deep semi-conscious aspects of culture that employees’ experience but do not think about when answering questions about culture.” Once trained to interpret photos, an AI program might be able to recognize how the work setting relates to culture—and how it impacts people. Could this help identify workplaces that are toxic? Probably not. But it might identify those that are more positive and supportive of employees in some ways, or scope out other general characteristics of the organization’s culture. Still, even if such measures seem useful, is this artifact of culture truly useful in making important decisions? More than likely, this sort of measure will be considered for use in composite scores and will win respect only if it proves predictive of performance. That could affect company behavior, such as removing photographs of office interiors from websites and Google Maps to avoid analysis. UIC researchers are creative, however, and might label an absence of photos as a UIC in itself.
UICs may be particularly useful in the context of risk management because they could eliminate some problems associated with self-reported surveys and the need to trust organizations to be transparent. In companies with poor cultures, self-reporting tends to be more skewed and less reliable than in those with better cultures. When risk is understood as arising from bad cultures or collective deception, UICs might become a useful tool for identifying it. A study of UICs as indicators of risk in the banking industry found strong relationships between culture UICs and financial risk. This is an example of how UICs can identify risks that are not easily detected by other means, and it shows how important these unobtrusive metrics might become to companies, investors, and regulators.
Indeed, the use of UICs could possibly have prevented the ethical scandal that surfaced at Wells Fargo in 2016. The bank’s employees had opened millions of unauthorized accounts without customers’ knowledge because company culture pushed them to do so. It is likely that this issue turned up in customer complaints or the comments of former employees before the problem cost Wells Fargo $3.7 billion. And skepticism about the bank’s seeming success might have surfaced via aggregated monitoring of media commentary. In such cases, early detection of a problem is paramount. Investors could divest if they see a problem looming; the company might even prevent a scandal from going public by recognizing a problem and addressing it swiftly.
An ethical dilemma might thus arise. When external measures such as UICs indicate a problem that internal ones do not, is the company obligated to act? By analogy, some energy companies (e.g., ConocoPhillips, ExxonMobil) with high ESG scores may be polluting as much as those with low ESG scores (ESG ratings themselves being unobtrusive measures). These companies are checking ESG boxes, but their culture is not leading to cleaner energy production. Energy companies shouldn't be judged by unrealistic standards or punished simply for being in the sector; at the same time, only companies in the energy sector can be expected to lead in producing clean energy.
For UICs, a parallel situation may occur. They might indicate a good culture but not necessarily good performance. Or they could be used to draw incorrect conclusions. The UICs might even signify correctly but in ways that are not relevant (material) to the company's operations. This again highlights why an organization may want to conduct its own research to learn which UICs really matter for its organization and industry, and which can be safely ignored. In addition, it would then have ammunition for rejecting findings from certain UICs in the future if its research has shown they are not relevant.
It's important to recognize that because UICs draw on publicly available information, there is nothing to stop their creation and dissemination on any topic of interest. These could include such predictable topics as pay, equality, and treatment of employees, as well as more controversial topics like the environment, politics, and religion. Some investors are already using ESG-type scores derived from UICs to inform investment decisions, while some customers are turning to providers of scores that tell them how to shop in a way that supports their values. It might be fair to say that the use of UICs is an inevitable part of technological advancement. It must be emphasized, however, that their quality depends on adequate research to establish their accuracy. How people choose to use them is another problem altogether.
The use of UICs can be part of a systems approach to ethical cultures. In other words, the attention paid to UICs is likely to change the behavior of organizations because they know these measures are being widely used. For this reason, boundaries are probably needed to ensure that attention is not misdirected to unimportant matters or used to unfairly harm companies.
For example, it may be inappropriate to make use of UICs that rely on metrics such as politics or how executives behave outside the workplace. Even if they hold predictive power, they may violate rights to privacy or invite discrimination based on religion, beliefs, or values. Measures should also be appropriate to the industry and to realistic expectations. As with problems we've seen regarding social media and "cancel culture," it may not be reasonable to use UICs about women's reproductive rights to make judgments about, say, an automaker. Yet at the same time, consumers and investors have the right to use this information in deciding how to deploy their money. The boundaries are unclear, just as they have been with regard to ESG ratings and certain other metrics. (What exactly is a responsible business responsible for?)
Some internal boundaries may also be important. Organizations will need to decide which UICs they will use and which they will not. Reputation dashboards could harm the organization by promoting excessive concern about reputation or causing it to focus too heavily on pleasing certain groups. The challenge is to find the right balance between monitoring and managing reputation, and then to balance the goals of the organization against its external impacts. It is difficult to know when reputation management benefits the business and when it is fruitless or even damaging.
Organizations that conduct or participate in the validation studies needed to compare UICs to internal culture data may gain the advantage of being viewed as transparent while learning which external data can contribute value to their goals. The validity of UICs is likely to vary by industry and organization, adding to the benefit of being directly involved in such research. Participation can take the form of working with a third party to collect data or of conducting an internal study. Ideally, to ensure that efforts to use UICs are efficient and effective, many organizations will create some kind of program to assess UICs and will share results with the public and one another. Others will hold such research close, disclosing it only when necessary to defend against attacks on their reputation.
A final ethical concern is that greater use of UICs will lead to gaming the system. External actors may use them to attack organizations or manipulate them. A company might take steps to ensure that its UICs look good, even if its internal workings are not truly ethical. Research on UICs should (and often does) take this into account by assessing their quality according to how “gameable” they might be.
Ethical Systems is currently seeking business partners for UIC validation studies that compare internal culture data to a collection of relevant UICs. If you are interested in collaborating, please contact us. As a nonprofit organization, we are committed to helping organizations become more ethical by providing high-quality and unbiased research services.