If you’re like me, you’d probably like to believe that the quality of your data science work will stand on its own, and speak for itself.
Whether people like you as a person…that’s maybe a secondary concern, at best.
I personally really want to believe this because (a) I’m heavily introverted, and (b) that would free me up to focus more on developing technical skills – skills that I can see being more concretely relevant to data science or machine learning.
However, I do think emotional intelligence (or soft skills) is becoming increasingly important if you want to do well as a data scientist long-term…regardless of whether you’re a manager or an individual contributor.
I really want to resist this…but I’m slowly coming around to accepting it. Reluctantly.
It’s way easier to move past a ‘bad’ mistake if people like you
If you’re doing hardcore data science, there’s a good chance you’re hanging out at the cutting edge – whether you’re the first to find a precursor of a new contextual insight, or maybe you’re the first one to even look at these datasets in this combination.
In any case, you’re probably spending a lot of your time in uncharted territory.
With that in mind, you will eventually make a mistake in front of the end-user: maybe an ‘insight’ turns out to be a correlation that’s already well-known to the business, you spend a week digging into an ‘interesting anomaly’ that ends up being nothing more than a basic data error, or you present a graph implying an obviously impossible situation.
This stuff will happen if you’re hanging out at the cutting edge.
Put another way: if you’re not making mistakes, then you’re probably not trying.
Some of your mistakes will be blatantly obvious to end-users – maybe more than they’re letting on. When that happens, they’ll have a split-second decision to make – is this person incompetent…or is this a really difficult problem and growing pains were always expected?
You might be shocked at how starkly end-users’ reactions to a data science error can differ – with the variance based largely on whether they liked the data scientist or not.
Of course, being likable doesn’t mean you can deliver garbage and expect the user to be happy. My point is that gray areas are far more common than they might seem – there’s often a thin line between work that reads as solid analysis and work that reads as substandard.
People will give you more benefit of the doubt
Similar to the above, chances are you’re never getting crystal-clear direction from your client or user about what exactly they want to accomplish.
They probably have some idea (of varying vagueness) of what they’re looking for, or rough hypotheses they’d like you to check – but unless you’re a very junior data scientist (at which point you’re essentially a data analyst), you’re going to be receiving ambiguous direction.
With ambiguous direction comes a lot of responsibility – and sometimes ambiguous results.
More specifically, you’ll sometimes decide on an approach the end-user wouldn’t have been 100% on board with…but again, they have a ton of wiggle room in how they react to and value your work.
If you have previously invested in building trust and establishing clear communication with them – including showing occasional uncertainty – they’re far more likely to trust you when you make an informal recommendation about which approach to take.
If they don’t like you, for whatever reason, and then you make a mistake…there’s a good chance they’ll be getting a second opinion, and maybe permanently.
Iterative user feedback is becoming more important – and friction is a killer
A great way to avoid catastrophic mistakes with your end-users: catch the mistake while it’s tiny, and very little time (or reputation) has been implicitly staked on a mistaken belief.
If you’re operating from a mindset of “leave me alone, I’ll do this hardcore analysis and then get back to you,” you run a high risk of incurring large, unforgivable mistakes – that were entirely preventable.
Alternatively, if you make it a point to get frequent, iterative feedback from your end-users, it’s way less likely that small mistakes would ever get the chance to morph into big ones.
Another thing about getting useful iterative feedback: your users have to like you if you want quality feedback.
You’re asking them for multiple informal meetings (when they could be spending that time on other things), and if they know you get defensive about every little perceived slight – they’re just not going to tell you when you’re straying off the path. But they’ll tell someone else, and you could be off the project.
It’s way easier being a data scientist if your users like you. Now, how to actually get them to like you is a topic for another article (or series)…
The views expressed on this site are my own and do not represent the views of any current or former employer or client.