Daniel Ziegler

This interview is part of the “A Peek behind the Curtain” interview series.

Daniel Ziegler researched AI safety at OpenAI. He has since left to do AI safety research at Redwood Research. In this interview, Daniel and I discuss his career path, work motivation, and advice for getting into AI safety. 

Note: This interview is from 2020 and some parts may no longer accurately represent the guest's views. The transcript has been edited for clarity and readability.

——————————————————————————————

Lynette Bye: When you're deciding which things to do, are you explicitly sitting down and deciding how to allocate time or which things to work on, or is it more of a go with the flow, “does this feel productive right now?” process?

Daniel Ziegler: I think that overall I tend to operate in more of a go with the flow process, and I'd probably benefit from being more systematic. There are some feedback loops in my one-on-ones with my boss, where we discuss how I spend my time. I sometimes do some time tracking for myself to see where my time is going and make adjustments based on that, but overall, it feels more go with the flow, a day-by-day process of planning what I'm going to focus on today.

Lynette Bye: How strategically have you chosen which big things to aim for in the past, whether by trying out things or asking people for advice, thinking about the big picture?

Daniel Ziegler: I feel the way this has gone for me is that I've insisted on doing something that seemed plausibly the most important thing to be doing, and I bounced around a bit in what I was actually doing until that seemed like it was the case. I have certainly sought out people for advice and also done some amount of explicit thinking about what seems most important.

Although some things also just felt overdetermined. Once I bought into the basic AI safety stories, it seemed very likely the most important thing for me to work on, given my background. As far as seeking out advice goes, I think I got exceptionally lucky in the people that I've just happened to be around, which made it easier for me to jump into the field.

I think I did ask people for advice some amount, but I never very strategically sat down and was like, "Okay, let me see if I can find all the people doing relevant stuff and get their advice." I could go through some examples if that'd be useful.

Lynette Bye: Yes. I'd be interested in the example of a time when you bumped around for a bit before finding the most valuable thing to be doing at that time.

Daniel Ziegler: Yes, sure. I guess this is maybe the story of the couple of years before I was at OpenAI. In 2015, I went to the MIRI Summer Fellows Program. I had already given AI safety the initial stamp, but I bounced off of MIRI's research. I was like, "Okay, this isn't really working for me. Let me go back to what I was doing before." What I was doing before wasn't really that effective. I was just doing some formal verification research as an undergrad at MIT. I enjoyed it. It was interesting.

I don't think I was fully convinced at that point either. Like, "I don't really understand what MIRI's doing, so let's go do some other stuff for a while." When I more seriously thought about what I wanted to do with my career at the end of undergrad, I decided to give it a real shot. I ended up applying to PhD programs. That went well. I started at Stanford, but like I mentioned, that didn't really work out in the beginning. I took some time off from there. Then half a year later, I decided I would try to become a research engineer at MIRI or OpenAI, and applied.

Lynette Bye: Did you try other things or need to experiment more? Basically, looking in hindsight, we mostly see the path that you ended up following. What were the wrinkles along the way as you were trying to find that path, since you don't know where it leads when you're starting out?

Daniel Ziegler: This is less about exploring, but for a while, I was pretty seriously considering just doing generic, or not generic but direct-impact, software engineering jobs. Things like Wave, doing mobile money, which seemed like a pretty good thing. I think it would have been. In terms of personal fit, it might even have been better, but it would've been less impactful in expectation. As far as actually exploring goes, being a research engineer definitely seemed like it would probably be a good fit.

Just the time that I spent preparing for the particular opening was valuable for being like, "Okay, yes, this is something that I can be pretty good at." I spent a month and a half with a housemate of mine, just reading a bunch of deep RL papers and replicating some of them. That was actually a really good time and went well. It was a pretty good approximation of the interviews that I did at OpenAI, and also to some extent of the work itself.

Lynette Bye: How typical an experience do you think that was for somebody with a decently solid CS background? Do you think that many people would be able to do it in six weeks? If someone spent six weeks and wasn't at the point where they felt comfortable applying, how much of a signal should they take that they should try something else, versus that you were just unusually fast, your background made it easier, et cetera?

Daniel Ziegler: I think this was unusually fast. I think six weeks would be very little for most people. A few things helped. One thing was that I was talking to the team and they told me the specific material that I should be trying to work through. I also had a roommate who was willing to pair with me on this, and he happened to be unemployed at the time. He could just spend all day with me doing deep RL practice, which was obviously super helpful, both in terms of motivation and just getting through stuff faster and understanding the material, et cetera.

Some of it was circumstance. But realistically, I think I was also just well-prepared, not specifically for ML, but I had been programming for quite a long time, since I was eight years old, and I used to do programming contests. I had a lot of practice, and a lot of demonstrated talent, at quickly solving difficult problems. Anyway, long story short, in general I would expect people should try for significantly longer than that before concluding that it's not a fit.

Lynette Bye: I'm curious what would be a good amount of time for someone to try it. Say they're going for OpenAI, and they're not able to comfortably replicate papers, maybe they're not even getting to the interview stage yet. At what point should they be thinking, "Maybe this is less likely to be my relative strength, and I should consider other things"?

Daniel Ziegler: I guess I should say that after six weeks, you should probably be able to tell if it's something you're up for right now, even if you don't think you'll be ready to apply in six weeks, which will often be unrealistic. Certainly, even just starting out this process, you should be comfortable reading CS papers and at least be able to think about the math involved, or be able to go and teach it to yourself. If you're not at that point, you probably shouldn't start with paper replication.

I guess you could take a week or so and test whether it feels like you can practically read the latest deep RL papers, understand what's up, and feel like you could implement them if you really tried. If you're not at that point, then building a more generic CS background is probably useful first.

Lynette Bye: Do you have particular things that would be good to do if they wanted to work toward this sometime in the future? There are very limited jobs at the typical AI safety orgs. Do they just go for anything that has AI in the name? What happens then?

Daniel Ziegler: That's a good question. First I want to say that, yes, that's definitely a good idea. I made the mistake of feeling I needed to be working on some really impactful thing right away. That worked out okay in my case, I got somewhat lucky in that regard, but it was crazy that I didn't, say, apply for the Google AI Residency. I should have been totally happy to take at least a year or two and just try to find the place where I could skill up as well as possible.

I would say, it depends where you're starting, but it definitely makes sense to get good at programming. It definitely makes sense to get comfortable reading CS papers. It definitely makes sense to apply to programs like the Google AI Residency, the Facebook AI Residency, or OpenAI's scholars or fellows programs. I also think it makes sense to just work as an ML engineer or an ML research scientist. When I say ML engineer, it's probably a good idea to try to do something research-oriented. Try to get into a research lab, like Google Brain, just so you're familiar with that style of ML engineering.

I don't know what it's like to be a very applied ML engineer, as opposed to a research engineer, just doing generic ML work at a company. I think that could be okay. The more research-oriented the thing you do is, the better.

Lynette Bye: This one, I think, is a bit of a tricky question. I know some people who would like to do stuff in AI, but they're currently in a position where it's hard for them to get a tech job at all. At that point, do you think they should just focus on getting a stable position, and then think about reading up and looking into this after they've had a while to build up skills?

Daniel Ziegler: Most likely. For 90% of people, that's probably the case. I do think there might be that 10%, I don't know exactly how much, some small fraction of people who are happy to just do conceptual alignment work or other kinds of somewhat unconventional AI safety work that doesn't require as much programming skill, or having their general ability checked off.

Lynette Bye: How would people check if they were a good fit for some of this conceptual work?

Daniel Ziegler: I would say hang out on the Alignment Forum or LessWrong, read posts, and write comments. Write your own posts and see what the reception is. You'll uncover your abilities there. It is a narrow community with particular tastes, but that probably is the community if you want to do this kind of work at the moment.

Lynette Bye: Cool. Are there good orgs for building up skill or doing direct work other than the ones that are EA buzzwords, like CHAI, FHI, OpenAI, et cetera?

Daniel Ziegler: I would say any industry AI research lab should be good. I mentioned Google Brain; a bunch of companies have similar labs, like Baidu and Salesforce. Microsoft also has its own AI research stuff going on. That's definitely good.

Lynette Bye: Yes. If they're going to random industry labs, how worried should they be about developing AI capabilities that are potentially harmful?

Daniel Ziegler: I would give it some thought. My general philosophy is that in the giant pool of existing AI researchers, a little bit of extra work is usually just going to be a drop in the bucket. It's more important to be able to skill up, and potentially also to be part of important work that's being done and maybe get to steer it a little bit, which is also a good thing. For the most part, it just doesn't really matter so much.

Lynette Bye: What about government? Do you know of government programs that are doing good AI research?

Daniel Ziegler: This is definitely not my wheelhouse. I don't think I've heard of any US government labs that are currently doing very exciting AI research. There are, for example, DARPA-funded initiatives, which aren't run by the government but are done in pretty close collaboration with it, but I don't know much about them.

Lynette Bye: Cool. It sounds like for you, external validation was a really important part of getting the confidence to keep going. If people are not in a position to get that right away, what other signals might they look for? It sounds like being ready to implement papers is a good one for saying, "Okay, you should be thinking about AI stuff." What else?

Daniel Ziegler: There are the standard credentials, or the standard ability to get relevant jobs or get accepted to any of those programs. I think you can also try some small projects, write a blog post about one, and post it somewhere, or email it to me or whatever.

Lynette Bye: Would this be a project like trying to create their own neural network trainer, or what kinds of things might count as good small projects for a relative beginner?

Daniel Ziegler: Yes, that's a good question. If you're not at a point where you feel you can replicate cutting-edge research papers, you could still try to do some ML project, but maybe based on a more well-established, recent ML tutorial, implementing some standard technique. What I would probably want to see is, A, some interesting extension of it, or an interesting application to something new, and B, I'd want it to be well-executed. The things you conclude should be well justified. How long it takes is also an important signal. If it takes months and months to do a simple project, that's not a good sign.

Lynette Bye: There seems to be a question of how much the person is something like a go-getter, able to create a project and do it even though they don't really know the material and are figuring it out on the fly. How important is that in the field right now?

Daniel Ziegler: I think it is very helpful when you're doing research work. You don't need to be able to do cutting-edge work at the frontier of the field just on your own. I definitely never felt like I was in that position, but I was able to be like, "Okay, here's a relatively clear set of things for me to try to do," and then just figure out how to go and do them once some of the parameters had been set. I do think that was important.

No matter what kind of AI safety research you're doing, even if you're on a team where other people are driving the vision and most of the high-level stuff, you still need to be making lots of small decisions. You still need to be figuring out how to debug annoying problems where there's no standard rule book for finding the issue. You need to figure out how to skill yourself up on new things that come up all the time.

Lynette Bye: If someone is not depressed, but is having a really hard time getting themselves to do any kind of independent project, is that probably also a bad sign for their ability to do really good work here?

Daniel Ziegler: That's my guess. I don't want to say that very confidently.

Lynette Bye: Sure. Okay. When should someone be thinking about getting a Master's or PhD rather than directly trying to work at an org?

Daniel Ziegler: That's a great question. I would say PhDs and the right kinds of master's programs are still some of the best places to learn how to do conventional academic research, if only because you're forced to. If it seems like something you would be good at and you're interested in it, I do think that can make a lot of sense. If you're doing something like a PhD, that's maybe a five or six-year commitment, so certainly I would hope you'd spend a good chunk of that time already doing valuable research, and usually that should be possible.

Lynette Bye: For what things is traditional academic research what we need?

Daniel Ziegler: I think it's pretty close to the work that produces most of the conceptual AI safety progress. There are some skills that are somewhat less important, like writing papers and getting them accepted at conferences. There will always be plenty of things that aren't specifically relevant, but the general research workflow is good to be familiar with.

Lynette Bye: What are the biggest gaps right now, either in roles or skills that people have that you think the AI community needs?

Daniel Ziegler: From my perspective, at OpenAI, I think one of the biggest gaps is actually management and mentorship capacity, if we're thinking about hiring for our team, which we are going to be doing again very soon. What it feels like right now is that people need to already be at a very high level before we can accept them because we don't have that much capacity to mentor them, and we don't have that much management capacity.

Lynette Bye: So someone who has management capacity would also need the technical capacity to do the work themselves at a high enough level that they wouldn't need a manager, so that they can manage others, I'm guessing?

Daniel Ziegler: Probably, yes. It seems maybe possible to do without, but it's hard.

Lynette Bye: Cool. So if somebody had the skill and could apply right away, but they had a chance to get some good management experience first, that could be a valuable thing.

Daniel Ziegler: Definitely.

Lynette Bye: Jumping topics a bit. I'm curious how you personally relate to your work. Do you often feel like you're forcing yourself to work?

Daniel Ziegler: Unfortunately, yes. Especially during the lockdown, but even before. I still don't love ML engineering. It is interesting, but some of it is still a little bit gross. Part of that is just that I'm a perfectionist when it comes to technical things. I like making perfect, logically coherent artifacts. ML is just not like that; it's always going to be a bit messy and very empirical. You can certainly have a lot of intuitions about what's going on, but ultimately you have to try things, and then your program doesn't work.

The workflow is often a pain: you launch something, and half an hour later or even longer you realize it didn't turn out as well as you wanted it to, the output looks bad. You have to figure out why, and it could just be a bug in your code. It could be that your neural network wasn't big enough; it could be that your data wasn't high enough quality. It could be a bunch of things.

So I'm really saying two things here: the messiness and the workflow. The workflow is really bad for getting into flow, because your thing breaks after it's been running for a while, and you have to go back to it, think about it for a while, fix something, launch a new job, and then you have to switch contexts again while the new job runs.

Lynette Bye: More broadly, including workflow and other things, does it feel like there's a trade-off between doing work and being happy?

Daniel Ziegler: To some extent. I don't like to force myself to work to the point where it really makes me unhappy; I think it just wouldn't end up being that effective work either. But if I were just trying to be happiest and didn't care about the world, I would not be doing this kind of work, and I would also not be working this much. Maybe I would just take some time off and do random hobby projects for a while and find some maximally fun, interesting tech job. Although people who try to do that burn out after a few years, so it's unclear.

Lynette Bye: Do you ever feel guilty about not working more or not having more impact?

Daniel Ziegler: Yes, definitely. This is definitely a big part of my day-to-day experience. I still feel like I'm figuring out how to deal with this the right way.

Lynette Bye: Can you tell me what that's like?

Daniel Ziegler: I don't know. It's often a thing where I have some picture of how quickly I could get a thing done and how many actually focused hours I could get into the day, and then any number of things can come up, and then I feel like I failed. There's almost always some story for what I could have done better. That is frustrating. The problem is that it's more demoralizing than it should be. Sometimes I find myself working less effectively because I feel like I haven't been doing well on something. That's not a very helpful response. I would much rather set more realistic goals for myself.

Lynette Bye: Is there anything you do for self-care, for helping either with guilty feelings or more broadly? 

Daniel Ziegler: Yes. I think exercise is really important, although I haven't done it lately because the air is poison. I do find that when I'm stressed out, just going for a half-hour run or something really helps a lot.

Lynette Bye: About how many hours do you tend to work in a day?

Daniel Ziegler: That depends on how you define work. Hours that I'm sitting in front of the computer, or occasionally scribbling in a notebook, or doing work-related things: I think eight or nine hours a day during the week, and every now and then a couple of hours on the weekend. As far as very focused work, anywhere between zero and four hours in a day.

Lynette Bye: There's a debate about whether or not people will get diminishing returns as they work more hours. Do you notice that either for general work or for focused hours?

Daniel Ziegler: Yes, it definitely seems like that. At some point, I just get tired. If I'm really engaged, then I can keep working for pretty long. I have had days where I actually felt I got in eight-plus quite focused hours, but that's very rare.

Lynette Bye: Sure. Do you have any sense of when or how much you start to get less return for putting in an extra hour?

Daniel Ziegler: Yes. It does depend on the work. I think the first four hours in a day are more valuable than the hours after that.

Lynette Bye: Jumping a bit. Looking back, if you could send a list back in time to your freshman college self, what kinds of advice would you give? Pretend this is generic, not like "don't date X" or whatever. What would you recommend that they learn or do? How could they make better use of their time in college?

Daniel Ziegler: One thing I really should have done is, first of all, been a lot more willing to question my default plan. My default at the time wasn't even effective altruism. It was just: become a good software engineer working on some reasonably interesting problem somewhere. Freshman year of college, unsurprisingly, I discovered a bunch of things and my plans changed. I should have been much more aware of how likely that was and focused more on exploring rather than just exploiting the things I was already good at. Instead of taking a bunch of classes on things I was already pretty good at, I should have branched out into other subfields of CS. I should have thought about other possible career paths more seriously.

This is a little bit unclear but when I was choosing my friends and my living groups, I think I focused a lot on some aspects of culture that seemed important to me and to some extent on finding people that would help me grow in ways that I felt I wanted to grow in but I don't think that really worked out that well. I hesitantly wish I had instead doubled down on finding really strong intellectual peers and made that more of a priority. The other stuff's important too but I neglected that a little bit and could have had a stronger set of intellectual peers around me in college if I had optimized for it.

Lynette Bye: Are there ways you would have advised them to go about college differently, counter-intuitive mindsets, or anything like that?

Daniel Ziegler: Another mistake I made was focusing too much on classes. It's just such a salient thing to optimize for, but it didn't really matter that I had a good GPA, and most of the classes I took weren't really that important. Especially given that my plans changed, but it would have been true even so. Just spending more time doing other things would have been good.

Lynette Bye: Like what?

Daniel Ziegler: I don't know. Debating with people about AI safety or just generically what the most important levers are in the world and what the most relevant career paths or just skills might be. 

Lynette Bye: Would you have tried doing anything more ambitious than undergrads usually attempt?

Daniel Ziegler: I think I was already doing pretty well on this. I guess I stumbled into a relatively high degree of success. I don't know if it counts as ambitious, but I did start doing pretty serious undergraduate research around sophomore year, and happened to be on a project that won a best paper award a year later. That was exceptionally lucky. I also did try to run the MIT Effective Altruism club, although I really think I was not a terribly good personal fit for that, and I actually regret it.

Lynette Bye: Going a little bit deeper here, I'm curious what some of the biggest struggles were that you had to deal with in getting to where you are, and how you responded to them. Want to share that story?

Daniel Ziegler: Yes. The time from the end of college, when I was trying to figure out what to do, through the time when I got hired at OpenAI was honestly a pretty rough period of my life. AI safety seemed like this probably really important thing, although I didn't fully understand the arguments at the time. One thing that was particularly rough is that I didn't really feel I had people around me to discuss these things with that well. I didn't manage to create the kind of community that I wanted with MIT EA.

I also think I didn't take advantage of some resources that I did have. I definitely knew people in the space, I knew EAs, and I certainly could have reached out more and built more friendships that would have given me more confidence in my thinking and my plans, and improved and refined them. I spent a long time trying to decide whether I even wanted to accept my PhD offers, because I didn't really feel like I was prepared to do a PhD.

In hindsight, I think I was, as long as my expectations were a little bit lower. I should have expected to spend quite a bit of time just skilling up in ML and getting started on some research project, even if it wasn't AI safety-relevant. But at the time the whole thing seemed very daunting to me, and people were telling me, "You should totally be qualified for this." That seemed very reasonable, so I went for it, but it did feel pretty daunting.

I'd take a grand total of like one ML class maybe and plus a special January class and didn't really feel I was prepared. I didn’t really know a lot of people, got this random medical false alarm where 23andMe told me that I had hypertrophy cardiac. Well, I downloaded 23andMe's raw data and then uploaded them to a different service. Not very valid or anything, I was definitely silly to take it as seriously as I did, but that coincided with me starting at Stanford and I was stressed out that I might have this chronic heart condition. All that just meant I was very anxious and not sure how this was going to work out and I just basically just bailed out and took medical leave. 

Another part that was difficult was deciding between MIRI and OpenAI, because MIRI's programmer interview process is long, and even after that process, I still didn't have a great handle on what MIRI was trying to do, or what the reasoning behind their research was. I was hoping that I would get a better sense, be able to pass their ideological Turing test, and then know whether to take the offer or not. Instead I just had to be like, "Well, it's been a few months and I still don't really get it. I think I should just do the thing that I feel like I understand and that makes sense to me."

Lynette Bye: How did you get from anxious PhD dropout to spending six weeks of really intensive studying for interviews? That seems like it takes some fortitude.

Daniel Ziegler: Yes, I don't know. I did spend a few months just not doing much at all: hanging out with people, taking care of a three-year-old, starting this group house and all the work involved with that, and doing random fun programming projects. I just didn't worry about it for a little while, and then after three months of that, I started to think about it again. Part of me was still pretty anxious even as I started, but once I got rolling, it just went very well. It was a great experience spending 10 hours a day with my roommate, learning a bunch, reading a bunch.

Lynette Bye: Were there any skills you learned to help you manage anxiety?

Daniel Ziegler: I don't know if this is a skill, but after having gone through the most intense period of anxiety when I was dropping out of Stanford, then coming out of that and realizing things were totally still fine, it became a lot easier for me to not take the anxiety quite as seriously and be like, "This is going to suck for a little while, but it's going to be okay." I built up a bit of a meditation habit, which seemed to help. Exercising also helps. I don't think I really had a great solution, though, other than getting to the point again where I felt like I had a plan for my life.

Lynette Bye: Are there things that people don't say out loud in your field?

Daniel Ziegler: One thing that's a little bit awkward is that there are a bunch of people pursuing different research agendas. Some people are like, "Yes, all this work seems great," but I think a lot of people working on one agenda or another look at the others and think, "This seems nearly useless within AI safety." That's kind of weird. People have spent quite a few hours trying to debate some of these things, and it's often surprisingly unproductive, or takes a long time to get to the bottom of the disagreements.

Lynette Bye: I'm curious about people coming into the field. I think they frequently see it as this monolithic field where you just go and work somewhere, but I feel like understanding these research agendas is a pretty important prerequisite. How would you suggest someone try to understand this for themselves?

Daniel Ziegler: Ideally, look at the output from some of these teams. I spent a lot of time reading Paul's blog posts and all the stuff that had come out of that team. I've also spent some amount of time reading papers. I think it would also be useful to look at some of the published work from CHAI, DeepMind's technical AGI safety team, and FHI, and just try to get to the point where you understand roughly what people are working on and why they're doing it, and get a sense for what makes the most sense to you.

Lynette Bye: I've recommended in the past that people try and write up a document of their understanding of the different research agendas so that ideally they can get feedback on that or at least just formalize their own thoughts of what's going on in this very chaotic space.

Daniel Ziegler: Yes, that sounds like a great plan. 

Lynette Bye: Jumping a bit more into productivity stuff, what are the biggest things you do to help yourself be productive? What are the things you would rave about?

Daniel Ziegler: As mentioned, I don't think this is something I've figured out perfectly, but one thing that's really helpful for me is just moment-to-moment or small-scale planning. At the beginning of the day, deciding what I want to do that day, and when I'm starting a task, writing down a list of the things that I'll probably want to do, or even just thinking it through. Just having that be clear helps a lot. I tend not to work with to-do lists; most of these things are pretty ephemeral. I just regenerate them at the beginning of the day, or generate a new list when I'm facing some decision about what to work on next. That helps me a lot: it helps me not rabbit-hole on things that are less important, helps me do time estimation, and gives me a sense of how much is left on a particular task.

Lynette Bye: We have a little bit of time left. I'd be curious to tell you a little bit about some of the theories that generated some of these questions, to see how much resonates with your experience. The one I'm working on right now is about iterated career exploration goal setting. The idea is: we're trying to find out how we can have an impact, and we usually can't envision the exact path we'll take. How do we walk this winding path and have a greater chance of ending up somewhere useful? I have an idea that the more you're explicitly testing something as you go, lean-methodology style, the more you can set slightly better goals each time. I'm curious if this matches either your career exploration or your research experience.

Daniel Ziegler: Yes, kind of. I think often you either are constrained or feel constrained in how freely you're able to just try different things. For example, in the half year before I graduated, as I was applying to PhD programs, I was like, "It would be really darn useful to go try to get some ML research experience now. I should try to talk to some professors doing that at MIT and see if I can just start a little project." I knew that this would have been nice, but I didn't feel like I had the time to do it. I had a thesis to write and some more classes to take, I felt like my research project and my thesis were behind, and I really didn't feel empowered to do it. In hindsight, I totally should have done it anyway. My thesis was phenomenally unimportant and would have been fine. Even if my supervisors had been a little unhappy with me, it's not like they were that happy with the thesis in the end anyway. I don't think it really mattered. Anyway, I think I did have the freedom there, but I didn't realize it.

I think research can be similar, because often the most important progress comes from thinking about things differently or trying to do a slightly different thing, but even having the idea to do that is difficult. Sometimes you get there by just bashing your head against one way of doing things for a while, because that's the only way you see, and then eventually you might come up with a different idea. There are probably ways to be more explicit about generating new ideas. Anyway, I'll also say I do think there is a lot of value in testing a lot of different things; it's just often difficult to do in practice.

Lynette Bye: Another of the ideas is about how one's personal success interacts with the people you're around, whether you're working with them day to day, getting mentoring from them, or asking them for advice. It seems to be an open question; different people utilize this to different degrees. It sounds like you have received some beneficial validation and training over time.

Daniel Ziegler: Definitely. I also received good ideas about directions to go in. This starts as early as when I was 12 and starting to do some programming for my dad's web startup. One of his employees suggested I do algorithm contest training problems. Getting started with that meant that about a year and a half later, I managed to qualify for the national training camp in the US for high schoolers and below. I met a bunch of really smart people there, including Jacob Steinhardt in 2008. I don't know how much I got out of that. Jacob invited me to spend a week with him at Dropbox in the summer of 2012 and I did some simple ML stuff, but it didn't really work that well at the time. This was also pre-deep learning. It seemed like maybe that was a missed opportunity. He also invited me to the first iteration of SPARC, and I didn't go because I thought I should train more for the IOI. I actually really regret not going. There's some chance it could have let me skip ahead a bunch, or at least gotten to know other good people. Hard to say. Anyway, sorry, I started rambling. What was the original question? [chuckles]

Lynette Bye: Something like how important it is to try to surround yourself with great people. Richard Hamming had this explicit theory, I think he called it the open-door policy, where people who isolate themselves and work alone peter out of ideas because they're not interacting with other people and generating new ideas. I'm curious if something like that is actually true.

Daniel Ziegler: Right. That definitely seems true. I think that it's probably good to separate career path stuff from research ideas.

Lynette Bye: Yes, that's fair.

Daniel Ziegler: I think I've definitely benefited from having people to follow, or at least people whose projects I could try to join. I've benefited from that a lot. As far as research ideas go, this definitely feels true to me, though I haven't done enough researchy work for it to play out that much for me yet. It does seem really important to have people to bounce ideas off of.

Lynette Bye: Yes.

Daniel Ziegler: I don't know, not sure. Some people also seem to just go off on their own and do a bunch of really good stuff. I don't know how that works.

Lynette Bye: Yes. I also have a question about how much spending regular time just thinking about the big picture matters: the big questions, what you don't know about your field. That seems like it's probably important for being able to generate new insights.

Daniel Ziegler: Yes.

Lynette Bye: I'm still testing out whether that's true.

Daniel Ziegler: Right. I think maybe for some people that requires talking to other people and for some people it's not as important, not sure.

Lynette Bye: It sounds like you've tried to do this-

Daniel Ziegler: Little bit.

Lynette Bye: -when you're trying to explore the research that you might want to do in the future. When you've done it, what do you think went wrong, or what would have made it as useful as you were hoping?

Daniel Ziegler: I don't know. It would help if I was smarter. [chuckles] That's a very flippant answer. I think part of it is conviction in my own ideas. As a general personality trait, I could use some more conviction in my own thinking, and that would make my own good ideas more salient to me and easier to capture and build on.

Lynette Bye: How much have you tried externalizing these thoughts, like keeping a running doc explaining your ideas and the evidence for them, something like that?

Daniel Ziegler: Not very much. I wonder if it's a chicken-and-egg thing: it doesn't feel like I have that many things worth recording in such a doc right now, so it doesn't feel like it would be useful at the moment, but maybe if I got the cycle rolling, it would work better.

Lynette Bye: Yes, I suspect you need to have an idea that you want to keep coming back to, but then-

Daniel Ziegler: Yes.

Lynette Bye: Well, if you ever try it, I'd be interested in hearing how it goes.

Daniel Ziegler: Might be a little exercise.

Lynette Bye: Yes. One of my other theories is that happiness and being really successful are correlated, probably specifically through enjoying the work.

Daniel Ziegler: Right.

Lynette Bye: This one is a fairly open question. I hear very contradictory opinions and I want to learn more about this because it seems like a lot of people feel guilty about not doing more.

Daniel Ziegler: Right.

Lynette Bye: It seems like the people who enjoy the work intrinsically just have an easier time doing it.

Daniel Ziegler: I tend to think that's right. You probably should be doing work that you really enjoy. Right now I feel like I'm not enjoying being a research engineer as much as I'd like, and that definitely makes me worse at it. I'm in this weird position where that concerns me pretty significantly, but it also maybe feels outweighed by the urgency of what we're doing and the lack of people to replace me. I'm not sure, maybe I'm making a mistake.

Lynette Bye: What's the mechanism there, the reason you think happiness is important?

Daniel Ziegler: Partly that happiness is important in itself, and partly that happiness is a signal. If you want to do great intellectual work, you want to be engrossed with something to the point where it's often occupying your shower thoughts or your wake-up-at-5:00-AM thoughts. You want to be obsessed with it to the point where you're not just solving the next task, but also, without too much effort, developing your ideas for the bigger picture or the vision for what could be. I also think it just means that you can spend a lot more time productively working on something if you're curious about it or driven by it.

Lynette Bye: I also hear terms like good judgment or "thinking well" thrown around in the community. It seems similar to what you're talking about here with trying to generate the big ideas. Some people use it for thinking about things in a way that gets them started on the right career path at all. It comes up a lot in things like research or organizational decision-making. I'm curious if you have thoughts on what this skill is and how to develop it.

Daniel Ziegler: Yes, it's interesting. In organizational decision-making, not that I've tried to run that many things, good judgment seems useful, but it seems like you also get a lot of mileage out of just having seen a lot of functional organizations in your domain, or having the right set of heuristics, either by default or conveyed to you. That aside, I do think there is something like good judgment, or thinking well, that's quite important for doing good work.

It still feels a little bit under-defined to me. There are maybe a few things worth mentioning here. To gesture at what I'm thinking: part of it is just logical precision, reliably being able to tell when an argument is logically sound.

Part of it is being pretty well calibrated. Even just in a calibration-training sense, this feels related. Not that most of these kinds of applications literally involve making a bunch of probability estimates all the time, but the mental motions that you go through to evaluate ideas or plans in an unbiased way feel somewhat similar to trying to be calibrated about a prediction.

I guess an addendum to the evaluating-logical-arguments thing: it feels like it's useful to be able to write mathematical proofs. If you can write rigorous proofs that actually hold up, or have a sense for when you've actually proved something, that's a good sign.

Lynette Bye: Okay. Jumping back, do you think there are things that EA frequently gets wrong about AI broadly? The movement or the general populace?

Daniel Ziegler: Yes. One is something we touched on briefly, which is being worried about contributing to harmful AI progress. I think this is just usually not a thing you should be worried about personally. Not only is it the case that by default your contribution is not going to be that large relative to the rest of the field, but there are some other reasons too. Certainly, the default position should be that technological progress is usually good. We have some arguments for believing that might not apply in this case, but I don't know.

You might think that reducing the overhang, how much more our AI algorithms could do relative to the hardware that's available, is good to do now, so that progress continues more smoothly in the future. We should feel good about making as much progress as we can now with the current hardware. I'm not sure I totally buy that. But the other thing is just that it's really very valuable to try to do real, practical work with real AI systems, to try to align them and make them do more of what you want. I think we gain a lot from doing that.

Lynette Bye: I'd be happy to think that we're not going to kill ourselves by accident too soon.

Daniel Ziegler: To be clear, the world as a whole might, but EAs trying to skill up or do AI research probably aren't going to be that big a contribution.

Daniel Ziegler: A random other thing which occurred to me about people trying to get into AI safety: the way it feels right now, people are often confused about replaceability when it comes to ML research or research engineering jobs around safety at places like OpenAI. They have both the sense that a bunch of people are trying to get into it, and therefore they should be pretty replaceable, and also people like me saying that we definitely need more very strong talent and it doesn't feel like we're finding that much of it.

I think this comes back to what I was saying about management and mentorship capacity. Given the current state of that capacity, the curve of how useful you are as a function of your skill on the kinds of things we do is zero for a very long time, then it rises relatively sharply and goes up to pretty high levels.

It means that if you are very strong, your replaceability is very low, and you will be able to just be an extra person on a team and make it better. But if you're borderline, then you will contribute, but also suck up some amount of coordination capacity, and we sadly don't currently have the space. I think there's a world where we'd have space for a lot more people, if we were better at things.

Lynette Bye: Right now it sounds like the conclusion of that is that if you can pass the self-test thresholds of “Can you replicate papers and stuff,” then you should definitely apply because if you get through the interview process and you're hired, you'll be very valuable. It's a relatively small investment in the long-term.

Daniel Ziegler: Right. And even if you don't pass that, it doesn't mean that you can't be useful for AI safety in the slightly longer term. If you spend your time skilling up and building career capital, the field will grow, the capacity to absorb more people will grow, and the set of well-defined tasks will grow.

Lynette Bye: Cool. Thank you very much. 

——————————————————————————————

Enjoying the interview? Subscribe to Lynette’s newsletter to get more posts delivered to you.