In this week’s “Meet the Splunktern” blog series, meet Mickey Dang, one of our software engineering interns. Mickey shares his experience as an intern on the Splunk Security Analytics team while living in California.
As someone who grew up around Toronto, I find describing winter to be a mixed bag. If I’m talking to someone from the west coast, the slightly defensive response is to admire the beauty of a snowfall and the sight of glittering icicles. To everyone else, I’ll include the part about trekking through blinding blizzards, barely witnessing daylight, and shoveling grey slush.
When I was given the opportunity, it wasn’t a particularly hard decision to spend my winter interning in San Francisco as a backend engineer on the Splunk Security Analytics team. I was somewhat familiar with Splunk products at the time; I had used their log tracing software before to debug applications during previous software development internships.
Over the course of my winter term, it quickly became evident that sunshine and hiking trails were not the only appeal. There were plenty of enjoyable moments debating back and forth with coworkers over proposed software designs. Often, after a prolonged discussion, I would be amazed at the elegance of their approach; sometimes, though, one of my ideas would stick and become a meaningful part of the implementation.
From prior internships at various companies, I’ve found it’s common to derive fulfilment from learning new skills or building a cool self-contained project from scratch. It’s rarer to find that fulfilment in working side by side with senior engineers and directly contributing to the design of a production application. I was incredibly surprised by, and thankful for, the level of trust, responsibility, and support provided by my manager and team.
For future software engineering interns who are looking to start at Splunk and may find themselves with a similar opportunity, here are some thoughts on making more productive contributions and finding even more fulfilling experiences.
There has got to be a joke somewhere about the number of tech stacks it takes to screw in a lightbulb, right?
There’s a never-ending list of tools and frameworks in software development, so it’s likely that you’ll be working with some unfamiliar ones. One lesson that helped me onboard and contribute faster was recognizing the limitations of trying to learn everything before starting, mostly by reading documentation. The premise is that you don’t know what you don’t know, and that makes it hard to prioritize the mountains of information you read; much of it will be tangential to your current work.
Fun Fact: I went on a hike every weekend during my time in California!
This term, after a few days of technical workshops and reading some key documents related to my team’s product, I asked my manager for a small ticket to work on. This way, I ran into plenty of problems that helped me discover what I didn’t know. That allowed me to ask my coworkers better questions and pick up relevant knowledge such as team-specific workflows, how to deploy new builds, and pieces of actionable information that weren’t immediately evident in the documents. My takeaway was that embracing ambiguity and diving into some hands-on work (after the basics are settled) helps you acquire and apply relevant knowledge sooner; it’s neither practical nor efficient to gain total, in-depth context before committing to work.
Picture this. You’re given a feature that seems straightforward, so being the eager intern you are, you dive right in and put up a pull request by the next day. Then it turns out your coworker envisioned another way to implement it and throws you a laundry list of requested changes, which makes you rework everything over the next few days. Throw in another reviewer’s conflicting interpretation of the task, some rebasing amid all the delay, and any subsequent reviews, and that straightforward task just consumed the entire week.
One strategy I tried before working on a complex task was to write down my interpretation of the problem, identify the one or two people who would review it, and discuss my high-level solution with them. If you’ve decomposed the task into subproblems and articulated various test cases, your coworkers will likely identify some helpful modifications and prompt some rethinking. That upfront half hour of coordination saves countless days of changes down the road. By making sure everyone is aligned on the scope and implementation, reviewers are more likely to request smaller, less complex changes rather than a wholesale refactor of the strategy and design.
I find it interesting that one of the central lessons of interviewing is to always explain your thought process out loud when whiteboarding solutions to coding questions. Then we come into work, swap our markers for a keyboard, and more often than not, we expect our code to speak for itself (boilerplate doc comments notwithstanding).
Rather than treating communication as an afterthought, it’s important to actively consider how your code will be perceived by others. It means thinking about whether your class and variable names align with the mental model your team uses to organize data and architectural patterns. It means determining whether a comment should be added to justify an opaque decision, or whether the entire approach needs a more intuitive refactor. It means predicting how a particular component may be used or extended in the future and encouraging a robust design that supports its continual evolution.
For example, one useful trick that helped me merge pull requests slightly faster was to review my own changes first and proactively start threads where I anticipated review comments. Staying ahead of the discussion gives your reviewers context and saves them a review iteration spent asking the expected questions. With some luck, you might even forestall the moment they ask about that obscure line of code that fixed your pesky bug and took you hours to figure out.
A photo from the Splunk Products Org. Ski Trip to Lake Tahoe!
Every internship feels like an opportunity for fun, discovery, and exponential growth. Between a company ski trip to Lake Tahoe, numerous intern events, and all the fascinating fireside chats, there were plenty of lessons to go around.
Personally, the standout part of my internship with Splunk and its engineering culture was the set of lessons I learned about the non-technical side of software development. Yes, it was helpful learning how to manage containers with Kubernetes, discovering that there’s an annotation for almost anything in Spring, and contributing to a super cool microservice. But it was also very insightful to see how mastering those tools was barely half the battle when it came to building high-quality software services. The common thread in all my advice is that it’s not particularly technical, and yet each piece of it greatly contributes to producing better and faster results. That’s not something you can pick up from a personal project or a school course, and for that, I’m grateful for the opportunity to work alongside such a fantastic team at Splunk.
Best of luck to every other student who is looking for internships or getting ready to start one!
— Mickey Dang
The Splunk platform removes the barriers between data and action, empowering observability, IT and security teams to ensure their organizations are secure, resilient and innovative.
Founded in 2003, Splunk is a global company with over 7,500 employees, more than 1,020 patents to date, and availability in 21 regions around the world. It offers an open, extensible data platform that supports shared data across any environment, so that all teams in an organization can get end-to-end visibility, with context, for every interaction and business process. Build a strong data foundation with Splunk.