Tim Hoad - Blog

Becoming a better interviewer by interviewing
Wed, 16 Jan 2013
http://hoad.id.au/1/post/2013/01/my-experience-as-an-interview-candidate.html

I've been around in the software industry for a while, and I've always been involved in interviewing and recruiting. It was only recently, however, that I was reminded of what it's like to be on the other side of the interview process. Identifying and landing great candidates is really hard - I typically interview about 10-20 candidates for each offer we make. One of the most frustrating things is when we find a great candidate and offer a position, only to have them turn us down. I wanted to take the opportunity to see what could be learned from looking at the process from the candidate's perspective. I made several observations that I think can help both in identifying the right candidates, and making sure that we land them.
After completing my undergraduate degree, I spent six years at Telstra Research Labs (TRL), where I was a member of the graduate recruiting committee. There, I learned the basics of hiring and interviewing. Telstra was a big proponent of behavioral interviewing, so I became quite practiced at asking behavioral questions and assessing candidates' responses.

After relocating to the US, I worked for seven years at Microsoft, in MSN Search, Live Search, and Bing. I started there as an individual contributor, and took the first opportunity to get involved with recruiting again. Like most of the major software companies in the US, Microsoft uses a series of interviews to evaluate candidates. These interviews include both behavioral and technical questions, though the emphasis is on technical assessment for science and engineering roles. I participated in these interview loops, and developed my skills in technical evaluation. As a manager at Bing, a key part of my role was to interview candidates and convince them to come and work with us. I did dozens of interviews during this time, and became quite adept at evaluating potential candidates. 

Ironically, my experience as a candidate was minimal. When I was an undergraduate in Australia in 2000, hiring decisions at most of the big companies were heavily reliant on academic results and performance in personality, aptitude, and IQ tests. The actual interviews were generally quite superficial, and rarely technical. When I joined Microsoft, I (naively) decided to accept their offer without interviewing anywhere else, so my experience in 2005 was limited too.

In mid-2012, I decided to start looking for a new role outside Microsoft. In many ways, it was my first real experience as a job-hunter in the software industry. I wanted to take my time and do it right. I was organized and disciplined. I started by making a list of the companies I was interested in. For the most part, this list consisted of the big tech companies (Google, Facebook, Amazon, etc.) as well as a couple of somewhat smaller companies that I had personal contacts with (LinkedIn and Twitter, for example). I prepared my resume, and reached out to each company either by contacting recruiters that I'd spoken with in the past, or by contacting former colleagues. I told each of these companies that I was looking for leadership opportunities in either Seattle or the Bay Area, and asked if they had anything that was a good fit given my background. From there, my experiences with these companies began to diverge. I won't cover all of them, but there are a few things that really stand out.

Let's start with the worst experience first. I spoke with a talent sourcer at one company who liked the look of my resume, and told me that he'd handed it off to a hiring manager. The hiring manager contacted me about a week later and gave me a job description for an IC (individual contributor) analyst role in Chicago. I reiterated my interest in engineering leadership roles on the West Coast, and declined the hiring manager's offer to interview. I never heard back. Perhaps this company didn't have anything that was a good fit, but with more than 400 open engineering positions publicly advertised at the time, I think that's unlikely. This leads to two lessons. First, don't match a candidate with a position that is clearly unsuitable - it just wastes everybody's time. Second, if a good candidate declines to interview for a particular role, take the time to match them up with something that's a better fit - they might end up being a great match and a great hire.

At another company, the process started very differently. The sourcers matched me up with two roles that I thought could be interesting. Before I began the official process, I had informal meetings with the two hiring managers. This was a really good opportunity for the hiring manager to get me excited about the role, as well as giving them a chance to ensure that I wasn't going to waste their time. The learning here: connect the candidate and the hiring manager as early as possible, particularly if the candidate has skills or experience that is likely to be in high demand.

After meeting with the hiring managers, I went forward with the "official" process, which is where things started to turn sour. First, I was asked to nominate four dates for my onsite interview and keep these available until the interview could be scheduled. This might be fine for an undergraduate, but as a full-time engineer and a manager of a development team, this is an awfully big request. To find a single free day on my calendar, I usually have to look 4-6 weeks ahead. Even with a month's notice, finding four days requires significant shuffling. Second, I was required to complete a "writing exercise", which consisted of a 3-4 page essay that was intended to assess my communication skills. The essay took several hours to complete, and it assessed something that a good recruiter should be able to gauge from all the email communication that happens throughout the process. Perhaps this saves the panel some time, but as a candidate it felt like "jumping through hoops". Often during the interview process, we're concerned with the amount of time we have to invest, but we don't put the same value on the candidate's time. If we want the candidate to have a good impression, we have to respect their time as much as our own.

The ultimate outcome with this company was that they decided not to extend an offer. That's fine - we all get rejections sometimes, and as a hiring manager I appreciate the need to err on the side of "No Hire". As a candidate though, I'd invested a great deal of time. Informal interviews, an essay, discussions with recruiters, phone interviews, onsite interviews, and all the preparation added up to more than 30 hours. With that kind of investment, I wanted to make sure I learned something, so I asked the recruiter for some feedback, hoping that I'd get some idea of where I needed to improve. Did I mess up on the technical questions? Was my experience not relevant enough? Was I not sufficiently passionate about the product? The recruiter told me that their policy was not to give any feedback. Nada. Zip. For legal reasons, employers need to be careful about what they share, but zero is going too far. I walked away feeling like I'd completely wasted my time. You might wonder why they should care - after all, they decided not to make an offer. There is a very good reason: if you know me well, chances are I've told you about my experience, and that you shouldn't bother to interview at this company. If every unsuccessful candidate shares this kind of experience with their friends and colleagues, this company will find it harder and harder to find great applicants.

The final company that I'll mention is EBay. More than any other company, my experience interviewing with EBay left me feeling excited, valued, and eager to learn more. There were many things that EBay did that contributed to this, but three things stood out. First, they moved fast. All of my emails and phone calls were answered the same day. Interviews were scheduled on short notice. My recruiter called me the day after my onsite interview to extend a verbal offer. When a hiring team is discussing whether to offer a position to an applicant, there is often a sense of urgency if the candidate is interviewing with other companies. The reason for this may not be obvious. You may think that if you take too long, the candidate might accept another offer before you can get back to them, but I suspect that rarely happens. The majority of candidates won't accept anything until they've heard from all the companies they're seriously considering, even if it means stalling for a week or two. The real value of moving fast is the perception that it gives the candidate. Candidates want to join a team where they're going to be valued. Making an offer quickly gives the impression that there is no doubt about the outcome, and that the team is really eager to have them join. Playing hard-to-get doesn't work.

The second thing that EBay did well was to communicate openly. They were transparent about the process, and gave me feedback from the interviews. The feedback wasn't detailed, but it was enough that I felt I'd learned something, regardless of whether I ended up with an offer. They were candid about the pros and cons of the offer compared to what was available elsewhere. They were up front about the challenges in the team I was joining, and didn't try to hide anything that wasn't completely rosy. Other companies were unclear about what the role entailed, coy about the current state of the team, or, in one case, even secretive about what the product was. EBay’s honesty gave me confidence that I knew what I was getting into, and made me much more comfortable about accepting the offer when the time came.

The last thing that EBay excelled at was following up after the verbal offer. I heard from EBay on an almost daily basis from the time the verbal offer was made until I had signed the contract. Some of this came from my recruiter, some from the hiring manager, and some from the VP, but every exchange was patient and responsive. They gave me the opportunity to ask every question I could possibly think of, responded in detail, and had deep discussions with me about every aspect of the role. Getting the candidate excited about the position is a great first step, but keeping the candidate's interest all the way through is critical.

It’s easy to get caught up in the hiring process and forget about the experience from the candidate’s perspective. Stepping back and putting yourself in the shoes of the candidate will make you more successful as an interviewer and will help you land the best people. Above everything, it’s critical to make sure that the candidate feels valued – a candidate who feels that an employer puts less value on them than on their employees is going to be almost impossible to land.

Making judgment calls
Sun, 21 Oct 2012
http://hoad.id.au/1/post/2012/10/making-judgment-calls.html

I’m currently leading a team at Bing called “Whole Page Organization”. We are responsible for a range of features that are displayed on the web search results page. One of the key common threads between the features that we build is that they are centered around understanding heterogeneous data coming from various backend services, and they are intended to add a level of cohesiveness and richness to the experience for our users. We are a very data-driven team. Whenever we build a new feature, or make non-trivial changes to an existing feature, we strive to measure and understand the change as deeply as we can. Sometimes, however, we need to make a decision that conflicts with what the data tells us.
Whenever we test a new feature, we measure it using two primary approaches: offline metrics and online metrics. Offline metrics are human judged, usually by trained editorial staff. These metrics range from very targeted tasks, such as identifying specific classes of defects, to very general tasks, where judges might be asked to subjectively rate the whole page. Offline metrics are primarily used to gain insight into the quality of specific algorithms or data sets.

Online metrics are built on user interaction data. We collect detailed information about how our users interact with every part of the page, and compute aggregate statistics on page click rates, user sessions, dwell times, and so on. Online metrics are generally used to assist in understanding the overall success of a feature.
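
As a rough illustration, here is a minimal sketch in Python of how raw interaction logs might be rolled up into these kinds of aggregates. This is emphatically not Bing's actual pipeline, and the record fields ('user_id', 'session_id', 'clicked', 'dwell_seconds') are hypothetical names chosen for the example.

    from statistics import mean

    def aggregate_online_metrics(impressions):
        """impressions: one dict per page view, with hypothetical fields
        'user_id', 'session_id', 'clicked', and 'dwell_seconds'."""
        clicks = [imp for imp in impressions if imp["clicked"]]
        return {
            "page_click_rate": len(clicks) / len(impressions),
            "avg_dwell_seconds": mean(imp["dwell_seconds"] for imp in clicks),
            "sessions_per_user": len({imp["session_id"] for imp in impressions})
                                 / len({imp["user_id"] for imp in impressions}),
        }

    sample = [
        {"user_id": "u1", "session_id": "s1", "clicked": True,  "dwell_seconds": 42},
        {"user_id": "u1", "session_id": "s2", "clicked": False, "dwell_seconds": 0},
        {"user_id": "u2", "session_id": "s3", "clicked": True,  "dwell_seconds": 12},
    ]
    print(aggregate_online_metrics(sample))

A real pipeline runs over billions of rows with careful sessionization and spam filtering, but the shape of the computation is the same.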

Both of these classes of metrics are indispensable. They provide rich data to assist in making decisions, and often provide unambiguous outcomes. In some cases, however, the data can be inconclusive. Each of the metrics in both of these classes gives insight into very specific aspects of a feature. Sometimes the “big picture” is unclear due to conflicting metrics or weak signals. Some metrics, such as “sessions per unique user” (a measure of how many times our users visit the site), aim to provide an overarching view, but these usually have very low resolution and are subject to a great deal of noise. They are rarely sensitive enough to provide a clear and statistically significant signal.
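
To see why, here is a toy simulation (synthetic data, not real Bing numbers): even when a feature produces a genuine 1% lift in sessions per user, the variance in per-user visit counts usually swamps it.

    import math
    import random
    import statistics

    random.seed(0)

    def simulate_users(n, mean_sessions):
        # Skewed visit counts: most users visit rarely, a few visit a lot.
        return [random.expovariate(1 / mean_sessions) for _ in range(n)]

    control   = simulate_users(10_000, mean_sessions=5.00)
    treatment = simulate_users(10_000, mean_sessions=5.05)  # a genuine 1% lift

    lift = statistics.mean(treatment) - statistics.mean(control)
    half_width = 1.96 * math.sqrt(statistics.variance(control) / len(control)
                                  + statistics.variance(treatment) / len(treatment))
    print(f"observed lift: {lift:+.3f} sessions/user, 95% CI half-width: {half_width:.3f}")
    # With these numbers the half-width (~0.14) exceeds the true lift (0.05),
    # so the experiment cannot distinguish the change from noise.

With these toy numbers, shrinking the interval below the lift would take roughly eight times as many users per arm, which is one reason such top-line metrics tend to be a sanity check rather than a deciding signal.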

A recent feature that my team has built is called “people aggregation” (see figure below). The concept is that when a user is searching for a person on the web, we will group together results that we know are related to a specific individual. Many names, such as “Danny Sullivan”, are ambiguous—there are multiple individuals with that name that users are likely to be looking for. By grouping the results about each person, the user can more easily distinguish between results about the person they’re searching for and results that are unrelated. In the current implementation, the original placement of the results is left untouched, so the grouped results are duplicated on the page, albeit with a different presentation. After extensive measurement and experimentation, we shipped the feature, since we saw a slight improvement in user engagement on the page when our feature was shown.
[Figure: the people aggregation feature grouping web results for a person-name query]
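
For readers who prefer code to prose, a rough sketch of the grouping idea follows. It assumes, purely for illustration, that a backend person-disambiguation service has already annotated each ranked result with an entity_id; it is not the production implementation.

    from collections import defaultdict

    def build_people_groups(results):
        """results: the ranked result list, each a dict with at least an
        'entity_id' key (None when no specific person was recognized)."""
        groups = defaultdict(list)
        for rank, result in enumerate(results):
            if result.get("entity_id") is not None:
                groups[result["entity_id"]].append((rank, result))
        # Only entities with more than one result are worth grouping. The original
        # results stay in place, so the grouped module duplicates them with a
        # different presentation, as described above.
        return {eid: items for eid, items in groups.items() if len(items) > 1}

Everything interesting - deciding which results really belong to the same person - happens upstream of this function, in the backend services mentioned earlier.
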
Once we had shipped, we saw a potential problem. In some cases, such as “Lady Gaga”, a name query is not ambiguous, and all of the results on the page relate to a single person. When we group the results in these cases, we’re generally not adding any value, since we’re just duplicating the first few results and not making it substantially easier to find the right documents.

When we saw this, we went straight to the data. We hypothesized that users would not be engaging much with the feature when the query is unambiguous, and that the gains we saw in our experiments were coming from ambiguous queries. We divided the data into two sets: one where we were grouping results only within the top 5 ranked documents (unambiguous queries), and one where the grouped documents were originally more spread out on the page (ambiguous queries). What we saw was surprising—the difference in engagement was marginal.
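
The split itself was mechanically simple. The sketch below is illustrative rather than the exact experiment code: the top-5 threshold comes from the description above, while the record fields and the engagement proxy (clicks on the grouped module) are assumptions.

    TOP_N = 5  # the top-5 threshold described above; ranks are zero-based here

    def is_unambiguous(grouped_ranks):
        """Treat a query as unambiguous when every grouped result was already
        ranked in the top N of the original page."""
        return max(grouped_ranks) < TOP_N

    def engagement_by_bucket(query_logs):
        """query_logs: hypothetical records like
        {'grouped_ranks': [0, 2, 3], 'clicked_group': True}."""
        buckets = {"unambiguous": [], "ambiguous": []}
        for log in query_logs:
            key = "unambiguous" if is_unambiguous(log["grouped_ranks"]) else "ambiguous"
            buckets[key].append(log["clicked_group"])
        return {bucket: sum(flags) / len(flags) for bucket, flags in buckets.items() if flags}

Run over the experiment data, the two buckets came out surprisingly close, which is what made the choice below a judgment call rather than an obvious decision.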

So we were faced with the decision of what to do about this situation. For these unambiguous queries, the feature added weight to the page (which has an impact on load time) and intuitively it did not add substantial value; however, the data did not clearly support our intuition. There were several courses of action we considered. First, we could leave the feature as it was. Second, we could disable the feature for these unambiguous queries. Third, we could turn off the feature entirely. Fourth, we could adjust the behavior of the feature so that it would intuitively add value by removing the original results from the page. The last option would reduce the page weight even more than turning the feature off, and make the load time faster without losing good results from the page.

Turning off the feature entirely seemed like an overreaction. We had data to show that people aggregation had some small positive impact, and there were cases where we felt that it was useful. We eliminated this option first.

Redesigning the feature had some promise, but what we were considering was a risky change. We would have to carefully analyze the impact. We had occasionally done similar things in the past, and we knew that it would be difficult to predict what impact it would have. This was a course of action that would take some time (possibly months). In the meantime, we had a feature in production that we were not happy with. We needed to fix the problem sooner than that, so we shelved this option as an area for future experimentation.

That left us with the first two options: leave it alone, or disable it for unambiguous queries. We had no real data to guide the decision. At this point we went back to think about the design goals that we had for this feature, and to consider how the current behavior fit with our future plans.

One of the key design goals that we had was to organize the results on the page around entities (that is, individual people) so that users could more easily identify results that were related to the person they were searching for. Since there was only one person represented on the page in these problematic cases, we were not making it easier for users to find the right documents. It was clear that these cases did not align with our design goals.

Our future plans for the feature included several ideas related to variations in the presentation of the groups. These concepts did not lend themselves well to cases where the results page is dominated by a single entity.

After considering the initial design goals and future plans, we made the decision to disable the people aggregation feature for queries where the results that were being grouped were already in the top few documents ranked on the page. Sometimes the right thing for the user, and the product as a whole, is not what the data tells us.

Fresh new website up and running!
Sun, 02 Sep 2012
http://hoad.id.au/1/post/2012/09/first-post.html

My old site was so outdated and difficult to maintain that I've decided to kill it and start from scratch. So out with the old, and in with the new. I'm not sure how active I'll be on here, but my current intention is to add a new rant to the site every couple of months.

Let's see how it goes.