Semiconductor devices, or chips, are foundational to future technology, powering innovation, investment, economic development, and societal benefit. Professor Philip Wong's creative passion continues to drive this work globally, with decades of inspiring, inventive leadership.

At Stanford, Wong’s research aims to translate discoveries in science into practical technologies. His present research covers a broad range of topics including carbon electronics, 2D layered materials, wireless implantable biosensors, device modeling, brain-inspired computing, non-volatile memory, monolithic 3D integration, and more.

He has graduated 48 PhD students, 24 of them women and members of underrepresented minorities. He was a champion of diversity even before it became an imperative.

Wong's noteworthy contributions include advanced semiconductor device concepts and their implementation in modern semiconductor technology. His research has contributed to significant advancements in silicon CMOS scaling, carbon electronics, and non-volatile memory. These profound achievements span his industrial research and development and academic careers.

Professor Wong's work very much aligns with the recently announced U.S. CHIPS and Science Act. In the interview, we discuss the implications of the Act, which "bolsters U.S. leadership in semiconductors. The CHIPS and Science Act provides $52.7 billion for American semiconductor research, development, manufacturing, and workforce development. This includes $39 billion in manufacturing incentives, including $2 billion for the legacy chips used in automobiles and defense systems, $13.2 billion in R&D and workforce development, and $500 million to provide for international information communications technology security and semiconductor supply chain activities. It also provides a 25 percent investment tax credit for capital expenses for manufacturing of semiconductors and related equipment. These incentives will secure domestic supply, create tens of thousands of good-paying, union construction jobs and thousands more high-skilled manufacturing jobs, and catalyze hundreds of billions more in private investment."

Professor Wong's work very much supports Stanford's No. 1 ranking in PitchBook's top 100 colleges ranked by startup founders. "PitchBook's annual university rankings compare schools by tallying up the number of alumni entrepreneurs who have founded venture capital-backed companies. The undergraduate and graduate rankings are powered by PitchBook data and are based on an analysis of more than 144,000 VC-backed founders. Stanford-educated founders top both the undergraduate and graduate lists. UC Berkeley took the No. 2 spot for undergraduate programs, and Harvard was second among MBA and other graduate programs. MIT ranks in the top four for undergraduate and graduate school rankings, despite having a total enrollment of just 11,934 in 2021." US colleges make up 18 of the top 20 spots, with the other two held by Tel Aviv University (7th) and Technion – Israel Institute of Technology (15th).

Professor Philip Wong is the recipient of the 2023 IEEE Andrew S. Grove Award, the highest award recognizing outstanding contributions to solid-state devices and technology. The award criteria cover field leadership, contribution, originality, breadth, inventive value, publications, other achievements, society activities, honors, duration, and the quality of the nomination.

Professor H.-S. Philip Wong's remarkably compelling research, global societal impact, deep insights, lessons, innovations, and exciting narratives of discovery are explored in two extensive, unscripted interviews, with links provided and summary portions extracted below.

The IEEE, the Institute of Electrical and Electronics Engineers, with roots dating back to 1884 and more than 420,000 members in 160-plus countries, is the world's largest technical professional organization dedicated to advancing technology for the benefit of humanity. Professor Wong embodies the excellence of this iconic organization.

This article is based upon insights from my daily pro bono work across more than 100 global projects and communities, reaching, as of November 2022, more than 1 million CEOs, investors, scientists, and notable experts.

Professor Wong’s Brief Profile

Professor Wong's profile is so extensive that a summary is provided with the non-profit IEEE TEMS (see the interview series – Stephen Ibaraki – "Transformational Leadership and Innovation…"). The two direct links to Part 1 and Part 2 contain the profile and the two video interviews.

H.-S. Philip Wong is the Willard R. and Inez Kerr Bell Professor in the School of Engineering at Stanford University. He joined Stanford University as Professor of Electrical Engineering in September 2004. From 1988 to 2004, he was with the IBM T.J. Watson Research Center, where he performed much of the early research that led to product technologies.

While on leave from Stanford from 2018 to 2020, Wong was the Vice President of Corporate Research at TSMC, the largest semiconductor foundry in the world. Since 2020, he has remained the Chief Scientist of TSMC in a consulting, advisory role. At TSMC, as VP of Corporate Research, he built up a Corporate Research organization in Taiwan and in San Jose, CA, that aims to establish forward-looking technology leadership for TSMC. As Chief Scientist, Wong formulates research directions for TSMC and advises TSMC on matters pertaining to long-term strategy and vision.

To advance silicon CMOS scaling, he pioneered the device concept of using channel geometry and multiple gate electrodes to control short channel effects and enable transistor scaling to nanometer scale for 3-nm node and beyond. His work elucidated the design principles and demonstrated the first nanosheet transistor, thus pointing the way toward continued device scaling beyond what was considered possible with a conventional bulk silicon transistor. This device concept is the basis of modern transistors used in high volume manufacturing such as the FinFET and the nanosheet transistor.

Beyond the realm of CMOS, Wong is best known for his work on carbon nanotube (CNT) electronics. From the late '90s through today, his persistent innovations across materials, devices, circuits, and integrated systems have transformed the carbon nanotube from a model system for studies of low-dimensional physics into an emerging product technology that is also embraced by the world's largest semiconductor foundry. A rich body of research has pushed the envelope along all three axes of performance, scalability, and functional complexity. He co-authored the first device textbook on CNTs, educating current and future device engineers on carbon nanotechnology. His highly cited open-source CNT SPICE model has enabled much research around the world. Through the DARPA ERI 3DSoC project (US$61M), he and his collaborators translated resistive switching random access memory (RRAM) and carbon nanotube transistor technologies to a commercial foundry.

Wong is an early proponent of phase change memory and metal oxide resistive switching memory (RRAM). His students developed the open-source RRAM SPICE model that is used extensively by academia and industry (> 6,000 downloads on nanohub.org). His research group realized the first neuromorphic electronic synapse device based on phase-change memory, a foundational work that is routinely cited as the first work in the rapidly growing field of brain-inspired computing. He pioneered using RRAM for neuromorphic computing, with works ranging from device characterization to neural network system demonstrations. His work on non-volatile memory modeling and phenomenological understanding, scaling, and neural network system demonstration has inspired significant academic and industrial activities and investments. RRAM is now a product technology from several large commercial foundries.
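To give a flavor of what a phenomenological memory-device model captures, here is a minimal sketch of an RRAM-style synaptic device: conductance bounded between two limits, with each programming pulse moving it a fraction of the remaining range, producing the nonlinear potentiation/depression curves typical of such devices. This is emphatically not the Stanford nanohub.org SPICE model; all names and parameter values are illustrative assumptions.

```python
# Minimal phenomenological sketch of an RRAM-style synaptic device.
# Conductance is bounded between G_MIN and G_MAX, and each pulse moves it
# a fixed fraction of the remaining range (nonlinear update).
# All parameter values are illustrative assumptions, not measured data.

G_MIN, G_MAX = 1e-6, 1e-4   # conductance bounds in siemens (assumed)
ALPHA = 0.05                # fraction of remaining range per pulse (assumed)

def potentiate(g: float) -> float:
    """One SET pulse: push conductance toward G_MAX."""
    return g + ALPHA * (G_MAX - g)

def depress(g: float) -> float:
    """One RESET pulse: push conductance toward G_MIN."""
    return g - ALPHA * (g - G_MIN)

def program(g: float, n_set: int, n_reset: int) -> float:
    """Apply n_set SET pulses, then n_reset RESET pulses."""
    for _ in range(n_set):
        g = potentiate(g)
    for _ in range(n_reset):
        g = depress(g)
    return g

g = program(G_MIN, n_set=50, n_reset=0)
print(f"after 50 SET pulses: {g:.2e} S")  # approaches, never exceeds, G_MAX
```

Compact models in this spirit let circuit and architecture researchers simulate neural-network workloads on arrays of such devices before any silicon exists.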

For over 25 years, Wong has taken a leadership role in key conferences in the field of microelectronics: General Chair of the IEDM, sub-committee chair of the ISSCC, IEEE Executive Committee Chair of the Symposia on VLSI Technology and Circuits. He was editor-in-chief of the IEEE Transactions on Nanotechnology.

He has held leadership positions at major multi-university research centers of the National Science Foundation and the Semiconductor Research Corporation. He is the founding faculty co-director of the Stanford SystemX Alliance – an industrial affiliate program with 38 member companies to collaborate on research focused on building systems. He is the Director of the Stanford Nanofabrication Facility – a shared facility for device fabrication on the Stanford campus that serves academic, industrial, and governmental researchers across the U.S. and around the globe, sponsored in part by the National Science Foundation.

He is a Fellow of the IEEE and received the IEEE Electron Devices Society J.J. Ebers Award for “pioneering contributions to the scaling of silicon devices and technology.” This is the IEEE Electron Devices Society’s highest honor to “recognize outstanding technical contributions to the field of electron devices that have made a lasting impact.”

Wong received his B.Sc. (Hons), MS, PhD degrees from the University of Hong Kong, State University of New York at Stony Brook, and Lehigh University, respectively. He received an Honorary Doctorate from the Institut Polytechnique de Grenoble, France, and is an Honorary University Fellow of the University of Hong Kong.

Excerpts From Two Interviews with Professor Wong

AI was employed to generate the transcripts, producing 40 pages from more than two hours of interviews. Due to their length and technical complexity, the interviews are presented only as excerpts, edited extensively for clarity and summarized with a focus on key themes. The cadence of the conversation is kept, and Philip's personal voice is often maintained by retaining his words such as "I" and "our"/"we" when referencing Philip's work and/or work with his colleagues and students.

Philip spotlights his mentors and then his colleagues and students and their interactions throughout — view the videos to get these details.

In addition, the AI transcription is only about 80% accurate, so viewing the full, engaging video interviews is recommended for precision. Time stamps are provided with the caveat that they are approximate.

The interviews are recommended for all audiences from students to global leaders in government, industry, investments, NGOs, United Nations, scientific and technical organizations, academia, education, media, translational research and development, interdisciplinary and multidisciplinary work and much more.

Part 1 Outline

Philip’s turning points in his career. 0:00

Translational research is where it’s at from an industry standpoint, from a practical standpoint. 8:52

Working on top-narratives in carbon electronics. 13:11

What’s next for carbon nanotube technology? 17:00

Why do we need a copper barrier for silicon transistors? 21:48

Monolithic 3D integration and the advantages of low temperature. 28:28

How do you make use of the good attributes of all these new memory technologies beyond the traditional ones? 33:01

How can we improve or lower the energy barrier or access for students to touch hardware? 37:58

What final recommendations would you leave to the audience? 45:04

Part 1 Summary Extracts

Stephen Ibaraki 00:00

What are two or three inflection points in your life that created this marvelous career as a professor of electrical engineering, but also so foundational in all of the areas of semiconductor design?

Philip Wong 00:58

Philip talks about going to the University of Hong Kong and building the very first transistors for the university. Advisors played an important role, which now gives Philip the impetus to work with undergraduate students and make sure that every one of them gets the chance to showcase their own strengths and go forward in their careers.

In graduate school in the US, his professor, Marvin White, mentored Philip on academia, how to network with people, and how to participate in the technical community. "It was a wonderful experience as well; he has a big research group, and all the students that are in the group have been fantastic, and they become lifelong friends…because of his huge network of friends in the industry, I had no problem finding a job."

Philip ended up at IBM Research and his first manager helped him a lot in his career within IBM.

Stephen Ibaraki 04:51

It’s an interesting history narrative from early in your life—mentored by these other notable professors and so on into your career, and then also at IBM. But at each point in your career, you’ve been doing marvelous work; you’ve been inventing / creating. And now you’re at Stanford. So talk about what you’re doing at Stanford.

Philip Wong 05:22

Philip talks about his IBM work on high resolution imaging and the CMOS image sensors used today, and about his work on advanced semiconductors.

Earlier, Philip was doing research and then managing a research group at IBM Research to develop future generations of device technology. He worked even further out in time horizon, at a time when nanotechnology meant everything becoming small and "nano."

Philip went to Stanford in 2004, working on translating nanoscience and technology into practical technology.

His work encompasses transistor or logic technology (for example, for computation) and memory technology (for example, temporary or long-term storage, such as that used in smartphones).

Philip maintains very close relationships with many companies such as IBM, Intel, Samsung, and TSMC, and other companies that are basically doing research and development in the field. Stanford has a long tradition in making key contributions to industry.

Stephen Ibaraki 08:52

Translational research is really where it's at, from an industry standpoint and a research standpoint: have some kind of practicality. I know your present research covers areas like carbon electronics; 2D materials; wireless implantable biosensors; directed self-assembly; device modeling; brain-inspired computing; … at the forefront of non-volatile memory; monolithic 3D integration.

You're already using RRAM in 3D architectures, with really tight integration of RRAM and computing components, where you get massive acceleration and much lower power draw. It'd be interesting to see where that kind of work is going. (Note: faster chips requiring much less power and using novel new materials can address the climate change impact of data centers.)

I know you took a couple of years to be the Vice President of Corporate Research at Taiwan Semiconductor (TSMC, the world's largest and most advanced chip fabricator), with TSMC two or three years ahead of anybody else.

First of all, you’re interested in carbon electronics, and where do you see that going? Presently and into the future?

Philip Wong 11:06

Philip has a long history in this area. His colleagues at IBM in the late 1990s published on the very first transistors made out of carbon nanotubes: a sheet of carbon atoms organized in a honeycomb structure and rolled up into a tube (1 to 2 nanometers in diameter), thus the name carbon nanotube.

Philip works on translating discoveries into practical technology where the transistors can become a contender for the future generations of electronic chips.

This work continues with colleagues and his students at Stanford: building a whole system out of carbon nanotubes. How would it behave? Is it better than silicon (the material used in chips)? Is it a different thing?

DARPA has a program to put new technologies into a foundry, and it funded moving this work into a foundry.

“Of course, it’s not the end of the story, … I’m working on right now with my friends at TSMC … we work with their researchers to further develop these technologies into a viable leading edge technology node. And so it’s quite a long journey from maybe the late 90s to today.”

Stephen Ibaraki 18:36

I am really interested in your work with 2D layered materials. Can you talk a little bit about that as well?

Philip Wong 18:53

Philip discusses the whole area with much context.

2D layered materials are a new kind of material in which the atoms are arranged in two-dimensional layers.

As an example, the audience is familiar with graphene. Imagine if you take a carbon nanotube, cut it along its axis, and unroll it: it becomes a graphene sheet.

2D layered material sheets are very interesting; their electronic properties are different. We want transistors to be smaller and smaller: you pack more transistors onto the chip and you can do more things. The way the physics works, as you get smaller you get better energy efficiency; you can do more computation within the same amount of energy consumed.

There's a strong desire to shrink transistors to smaller sizes. You need to make the transistors very thin. That's part of the reason why we were interested in carbon nanotubes, which are one to two nanometers in diameter. 2D layered materials are less than a nanometer thick, so they are very thin.

Philip talks about 2D layered materials being an effective barrier which matters when working with copper.

Why do we need a copper barrier? Right now, all of the wires in computer chips are made with copper. But copper atoms are very mobile; they move around, diffuse into the silicon of the chip, and disrupt the electronic properties of the silicon. You want a barrier to prevent the copper from diffusing away. Graphene turns out to be a tremendously good barrier: it is very thin and conductive. That's important because you don't want non-conducting material to eat up space. So they worked on graphene, using it as a copper barrier.
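Why diffusion is such a concern can be illustrated with a textbook back-of-envelope estimate: the characteristic diffusion length grows as the square root of time, with a diffusivity that rises exponentially with temperature (an Arrhenius law). The pre-exponential factor and activation energy below are placeholder values chosen for illustration, not measured copper-in-silicon data.

```python
import math

# Back-of-envelope diffusion sketch: L = sqrt(D * t), where the diffusivity
# follows an Arrhenius law, D = D0 * exp(-Ea / (k_B * T)).
# D0 and Ea are illustrative placeholder values, not measured data.

K_B = 8.617e-5   # Boltzmann constant, eV/K
D0 = 1e-3        # pre-exponential factor, cm^2/s (assumed)
EA = 1.0         # activation energy, eV (assumed)

def diffusion_length_cm(temp_k: float, t_seconds: float) -> float:
    """Characteristic diffusion length after t_seconds at temp_k kelvin."""
    diffusivity = D0 * math.exp(-EA / (K_B * temp_k))
    return math.sqrt(diffusivity * t_seconds)

# Raising the temperature makes the diffusion length grow very rapidly,
# which is why a thin, effective barrier between copper and silicon matters.
for temp in (300.0, 500.0, 700.0):
    print(f"T = {temp:.0f} K: L = {diffusion_length_cm(temp, 3600.0):.3e} cm")
```

The exponential temperature dependence is the key takeaway: small temperature increases during processing or operation can dramatically extend how far a mobile species travels, hence the appeal of an atomically thin, conductive barrier like graphene.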

Many companies are exploring this direction right now, starting from the initial investigation by Philip and his colleagues/students many years ago. On 2D materials more broadly, there is a lot of research work going on around the world to try to make these into transistors, and to ensure that when made into actual working transistors they perform well.

Nanotubes and 2D layered materials can be fabricated into transistors at very low temperature. And that leads to the kind of research that Philip and his colleagues/students are working on very actively today: namely, how to build electronic chips in 3D, in three dimensions.

Today's silicon transistors need to go through very high temperature fabrication processes, oftentimes more than 1000 degrees Celsius.

They go through the high temperature processes to make the transistors first, and then put down at low temperature the metal wires (the copper wires and the insulators that surround them, which cannot withstand the high temperature process). So you do the high temperature process first, then the low temperature process to put down the wires.

But in the future world, if we want to build more transistors on a chip, we need to build devices on top of each other, just like building a skyscraper: going up in 3D. That gives you the ability to build more transistors onto a computer chip, and more transistors give you more functionality. It requires you to select materials for devices and wires that can be made at low temperatures, so that they are compatible with each other and you can build one layer, then another layer, without destroying the devices already built at the bottom. 2D materials and carbon nanotubes can be made at low temperatures. That's why their research has focused on these materials and devices that can be fabricated at low temperatures.

Stephen Ibaraki 26:35

I’m just curious. You’re dealing with these 2D layered materials. You’re talking about graphene, so pristine graphene has quantum properties. And maybe there could be some kind of extension of that research into room temperature quantum computing. At a fundamental level; I mean, some of the problems still have to be addressed, perhaps on superposition and entanglement. Do you see any of that as possibilities on these 2D materials, specifically, with pristine graphene?

Philip Wong 27:11

Philip speaks more generally, as he's not too familiar with the quantum computing aspects of it.

Philip shares that, generally speaking, you really do need materials that we don't normally use today to achieve quantum entanglement and build qubits. And beyond the qubits, you need control circuitry at low temperatures. So to build a complex quantum computer, you really need a lot of engineering effort, beyond what we're seeing today. All these developments that we see in classical computing devices are going to be very useful. In fact, even classical devices rely on quantum effects. In the plain old silicon transistors that we use in computers and phones today, quantum effects are on display every day; we count on those quantum effects. One very simple example: the memory that we use in storing data, like photos in your phone, requires quantum mechanical tunneling. Without quantum mechanical tunneling, those devices don't work. So we have been using quantum effects every day, for many years now. There's nothing new.
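Philip's flash-memory example can be made concrete with the textbook WKB estimate for tunneling through a rectangular barrier, where the probability falls off exponentially with barrier thickness. The rectangular-barrier simplification and the barrier height used below are illustrative assumptions (a commonly quoted oxide value), not parameters of any specific device.

```python
import math

# WKB-style estimate of electron tunneling probability through a rectangular
# barrier: T ~ exp(-2 * kappa * d), with kappa = sqrt(2 * m * phi) / hbar.
# The 3.1 eV barrier height below is a typical assumed oxide value.

HBAR = 1.0545718e-34   # reduced Planck constant, J*s
M_E = 9.1093837e-31    # electron mass, kg
EV = 1.6021766e-19     # one electron volt, in joules

def tunneling_probability(barrier_ev: float, thickness_nm: float) -> float:
    """WKB transmission estimate through a rectangular barrier."""
    kappa = math.sqrt(2.0 * M_E * barrier_ev * EV) / HBAR   # decay constant, 1/m
    return math.exp(-2.0 * kappa * thickness_nm * 1e-9)

# Thinning the barrier by a nanometer changes the probability by many orders
# of magnitude, the exponential sensitivity that memory designers exploit.
for d in (1.0, 2.0, 3.0):
    print(f"{d:.0f} nm barrier: T ~ {tunneling_probability(3.1, d):.1e}")
```

The exponential thickness dependence is exactly why flash memory can both retain charge for years (thick barrier at rest) and be written quickly (an applied field effectively thins the barrier).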

Stephen Ibaraki 28:31

You're talking about monolithic 3D integration, how you're working with that, and the advantages of low temperature. I recall seeing an announcement where you are working on some kind of architecture with RRAM which is much, much better than what we have today. And then there's really tight integration on chip with the computing components, so that you get this massive increase in speed, but also much lower power consumption. Can you talk more about that work? And I think you're doing some of that work with Taiwan Semiconductor (TSMC) as well, right?

Philip Wong 29:12

Philip speaks about a very important research and technology trend that we have today: computing requires not only very energy efficient and high speed computation itself, but when you do computation, you need data to compute on. Otherwise, what are you computing on? You need to add numbers, multiply numbers, and things like that.

Where did the data come from? The data can come from the external world. You enter the data, capture a picture and things like that.

Memory devices, such as dynamic random access memory (DRAM) or flash memory, are where you store your data, your photos. Those memory devices are built on different fabrication processes than the logic transistors where you do the computation. As a result of that, and also due to cost optimization, they exist on separate chips.

You have a chip for computing. You have another chip for storing the data. In order to get the data to do the computation, you need to move the data from the memory chip to the compute chip. The act of moving this data from the memory chip to the compute chip requires a lot of energy, burns a lot of power, and also takes time; in computer architecture terms, this is called latency: how much time you need to wait for the data to arrive. Over time, transistor logic technology has advanced tremendously, whereas the protocols and methods to move the data from the memory chip to the compute chip haven't improved as fast.

Over time, a gap has opened up, in the sense that you can compute very fast, but you don't have the data; you're waiting for the data to come. That's not good.

This is one of the biggest technical issues that the research and technical community is solving today: coming up with new architectures, like, for example, building what we call accelerators, and coming up with new device technology that could solve this problem.

One of the possible solutions would be to build the memory right on top of the computation chip, instead of having it on a separate chip. And one vision that we have is to build this in 3D.

So you have the compute chip, and you have the memory elements in a different layer on the same chip, so that you've got massive connections between the memory elements and the compute elements. You have massively parallel access; therefore, you can access the data in parallel very quickly. In computer architecture terms, this delivers high bandwidth with low latency, because now you are not getting the data from a separate chip far away, but rather right on the same chip. It's like growing your own food at home instead of going to the next town to get it: much easier access and much more energy efficient.
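The energy side of this argument can be sketched with a back-of-envelope comparison. The per-operation energies below are commonly cited ballpark figures for an older process generation, treated here purely as illustrative assumptions; the point is the ratio, not the absolute numbers.

```python
# Back-of-envelope sketch of the memory-wall argument: fetching a word from
# off-chip DRAM costs orders of magnitude more energy than operating on it.
# All energy values are rough, commonly cited ballpark figures (assumptions).

ENERGY_PJ = {
    "32-bit add (on-chip)": 0.1,
    "32-bit multiply (on-chip)": 3.0,
    "on-chip SRAM read (32-bit)": 5.0,
    "off-chip DRAM read (32-bit)": 640.0,
}

def movement_overhead(op: str) -> float:
    """Ratio of off-chip DRAM access energy to the given operation's energy."""
    return ENERGY_PJ["off-chip DRAM read (32-bit)"] / ENERGY_PJ[op]

for op in ("32-bit add (on-chip)", "32-bit multiply (on-chip)"):
    print(f"one DRAM fetch costs about {movement_overhead(op):.0f}x a {op}")
```

Under these assumed numbers, a single off-chip fetch dwarfs the arithmetic performed on the fetched word, which is why stacking memory directly above logic, replacing the off-chip hop with short vertical connections, pays off in both energy and latency.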

Philip and his colleagues/students think that integrating things in 3D will be the way to go, working on memory devices that can be built in 3D, all made on top of the logic, computing chip.

How do you make use of the good attributes of all these new memory technologies, beyond the traditional ones that exist on separate chips, and combine them architecturally to derive the benefits? Part of the research is working on the memory device itself: improving it and coming up with new memory device technology with good characteristics, such as being easy to write, easy to read, fast to read and write, low energy access, and so on, but also on ways to integrate them in 3D.

In combination with transistor technology, this gives you good, highly energy efficient computation.

Much of the computation that we're doing today has to do with AI and machine learning. RRAM on top of a computing chip can run a variety of neural networks and neural network topologies, showing that it is a more energy efficient way to do so. Philip speaks about "the research direction that several of my students and former students have been doing, and some of my current students are doing as well."

Stephen Ibaraki 35:48

I can definitely see the applications of AI on the edge, and especially this aspect of sensors out there onboard rather than going through the cloud, right?

Philip Wong 36:01

Absolutely; sensors onboard. We also need to be able to process the data right at the source, where the sensor is, to eliminate or minimize the amount of data that needs to be moved from the sensor to, let's say, the data center, where the main computer is.

For example, let's take a trip to the future, 20 years from now. How do we interact with computing devices? I think we won't be interacting with computing devices using our phones, or even our computers; it will be some other things that we interact with. Just look back 20 years: phones were this big, right? Who would have imagined the phone would become such a powerful thing, so small that we all carry them around, whereas 20 years ago only very few people had those brick type phones. If you fast forward 20 years, the phones that we have today will probably look very clunky. People will laugh at us: who would use these phones 20 years later?

I think those kinds of scenarios will probably come when hardware devices make tremendous advances, along with, of course, software applications.

Nowadays, you cannot have any hardware devices without software, so the software has to come along as well. You have to have both the software and the hardware, going hand in hand. That's quite important.

And that's one of the things that I've been working on with my academic friends at Stanford and also elsewhere: to lower the energy barrier of access for students to touch hardware. Because I believe that the future world will require not only software, but also the appropriate hardware.

Even in a metaverse, you need some physical thing to enter the virtual world, and what is that physical thing? Even in a virtual world, there's tons of computation required to exist there, and who is doing the computation? Some computing devices. Those computing devices will have to be a lot more energy efficient than the computing devices we have today.

And so educating students so they have access to, and can experiment with, hardware devices is critically important. If you think about it, today students in high school can easily buy a computer for a couple of hundred dollars, or even go to the library and access a computer, write a piece of software, and do something very exciting with it. You could build an app, put it on the App Store, and sell it as well, even as a high school student.

And of course, at many universities worldwide today, and at least at Stanford, a vast majority of the students have taken computer science classes, because being able to write and understand software has become a language skill, like speaking English or Spanish or Chinese; clearly an important language skill. Software has become something that almost all students need to touch.

But the future world consists of both software and hardware advances, so being able to provide students with access to hardware experimentation is important. Hardware experimentation today has a very high energy barrier, not only because of the cost of the facilities required, but also because of the way we teach and because the current generation of hardware technologies requires a very long learning curve. One of the key questions we would like to ask ourselves is: how do we lower the energy barrier for students to access and touch hardware, so they can start experimenting and getting exposed to the possibilities? If we can lower the energy barrier for students to access hardware, to experiment with it and learn to work with it, that will be a truly revolutionary change for society. Because right now, the advances in software have been tremendously rapid, but pretty soon they will run out of steam, because the hardware is not catching up; you need both advancing at the same pace.

Stephen Ibaraki 41:27

I just want to touch on one other aspect: this idea of incorporating a system on a chip. You can do this with your monolithic 3D integration, with analog components in there as well. Analog combined with digital, combining the latest memory technologies, and you're getting massive parallelization, because it's all on chip.

Are you working in that area? So you’re getting more of a system? And not just the digital, but also the analog? Where do you see analog contributing to the overall solution paradigm that’s out there?

Philip Wong 42:11

Absolutely. You point out the key points.

Historically, the advances in device technology were so rapid. Basically, you discovered something, or you brought something to the market, and people figured out what to do with it. Because the advances were so tremendous, the stack was so big, being able to find applications was clearly no problem at all.

More recently, I think the way technologies are being developed is, instead of being more driven from the bottom, it’s more driven from the top. What I mean by that is, that we start from what we want to do.

Say you want to enter the metaverse; this is what you want to do. Then you go one step down and ask: what kind of software or hardware do I need to enable me to do that? You want a VR headset? What does that mean? It has to be small; it cannot be clunky. That means low power consumption. You ask: what can I do to achieve that?

So it comes down from the top, from what you want to do, and then filters down into the requirements: what do I need to have? What do I need to develop in terms of hardware and software technology to realize that vision or that application? Today, the system view is very important.

Any new device technology that one would advocate for has to have a system view. If I have these new devices, where do they sit in a system, in a prototypical system? Where are they being used?

How would using that give you better system performance? Not just different; being different is not the same as being better, right? So how is it going to be better than what you could do otherwise?

That kind of system-level analysis is critically important, and that's what we ask our students to pay attention to, and oftentimes to do their research on. They need to assess and characterize devices in such a way that you can then go one level or several levels up and assess, when you build a system like that, what kind of benefits you get. And that's pretty important.

Stephen Ibaraki 45:04

We’re down to our last question.

I could spend half a day just mining all of the work that you're doing; it's so fascinating and really profound. It's going to impact the world, for the benefit of humanity and Earth ecosystems.

I wish we had more time to get into the wireless implantable biosensors, your directed self-assembly, device modeling, and your brain-inspired computing. [We agree to another interview, which is provided below.]

What final recommendations would you leave to the audience?

Philip Wong 46:25

Thank you for asking this question. I think there are two aspects I would like to mention in a final comment.

One is, in the future world, technology is clearly going to be important. As you've seen in recent times, economic development, economic security, and the path towards better lives for everybody in society clearly hinge upon technology development and advanced technology, and much of that counts on computing technology.

For example, computing technology spreads out into many other areas. It is the same skill set and the same fabrication technology that one would use for, let's say, biomedical electronic devices, implantable brain implants, and so on, and also, for example, for battery technology. Many of the skills, basic sciences, and even practical process technologies can stem from the research in computer chips. And so semiconductor technology is really a foundational technology for many of the things that society does. I would expect that it will be even more important as time goes on.

You hear about chip shortages and so on, but that's only the tip of the iceberg. The real reason is that this is a foundational technology that will be carrying the weight of many of the technology advancements, and the economic advancement that can stem from them, going forward. That's a very important aspect.

Second is, we really need to find a way to lower the energy barrier for students' access to hardware technologies in general, and to building systems, whether computing systems, robotic systems, or other systems. Because the world cannot exist on software technologies alone. You cannot run today's software on a phone from 20 years ago. It just doesn't work, right? And I found that out because I want to keep my phone. I want to use my old phone, but I cannot, because the software gets updated (or not, when the hardware is too old) and my phone gets so slow that eventually it is not just performing poorly but inoperable. So you really need to have hardware technology go hand in hand with software.

I think if you are a CEO or a company executive, it is important to realize that even though in the past decade or two we have seen tremendous advances in software applications, going forward it's more like a pendulum. The pendulum will swing back, and eventually hardware technology will become the bottleneck for advances in technology. So we need to find a way to make sure that hardware technology also advances as rapidly as software technology has in the past twenty-some years. That's an important trend.

Broadly speaking, not in terms of any specific technology, hardware technology has to go hand in hand with software in the future to help us move forward. This is so foundational to everything that we do.

If you look at the United Nations' 17 goals for sustainable development, a good majority of them, maybe 13 or 14, depend on advances in technology or computing technology in general.

So that's a very important part, and society really would benefit greatly from advances in these fields.

Stephen Ibaraki 50:54

I actually helped work on the 17 Sustainable Development Goals, and the Millennium Development Goals before that in 2000. I would say all 17 depend on it, and really even foundational to that is your work, because you're like the atoms that make up the molecules that make up materials.

Your work is even foundational to everything else, because you work at such a basic, building-block level. Semiconductor technology ultimately drives everything else.

Your work is just outstanding. Thank you for coming in and sharing some of your insights with our audience. Your work continues to impact the world for the benefit of humanity and Earth ecosystems, and will be felt as inflection points throughout our history as well. So thank you again for coming in.

Part 2 Outline

Interview with Philip continues. 0:00

What if you could insert your biosensors within these organic spheres and then even within the actual cells? 7:16

Directed self-assembly is not just self-assembly. 12:55

What are some examples of models that have been used in the field? 19:17

Do you have a sense of what the percentage is for device modeling to work? 23:57

What is the next level of research in this field? 31:14

Where do you find all the computational power to train students? 39:29

The importance of having a national infrastructure to enable manufacturing. 44:11

What is the career path of students in academia vs industry? 51:46

Diversity and Inclusion in the industry. 1:01:17

The semiconductor industry is a once-in-a-lifetime opportunity to make an impact on the world. 1:08:30

Part 2 Summary Extracts

Stephen Ibaraki 00:00

Philip, thank you for coming in for part two of our interview. Your work is just outstanding, solid, foundational: not only today, but over many years and decades in the past, with all of the inventions and creation you've done, and also into the future.

Semiconductor technology and the foundational work you’re doing really drives everything else.

So that’s why I wanted to get more about where your work is going. We just didn’t have enough time in the first interview. We’re going to continue in that journey.

Philip Wong 00:38

Cool, happy to be here. Thank you.

Stephen Ibaraki 00:41

We talked about your carbon electronics. You've been doing that for decades, and you are bringing it to fruition, working with foundries. We talked about 2D layered materials, non-volatile memory, monolithic 3D integration.

Let’s continue in the work you’re doing. The next topic is on wireless implantable biosensors.

Philip Wong 01:24

This is a kind of like a pet project that I’ve been working on for a number of years with my collaborator, Professor Ada Poon at Stanford. She’s a wireless expert, and has been applying her skills to not only wireless communications, which many people may be familiar with, but also the use of wireless communications in biosensing and communications with biomedical applications.

Perhaps you may want to interview Professor Poon. She’s done fantastic work in this field.

So this pet project started in an interesting way. I met somebody from the medical school in the Bytes Cafe, in our department headquarters in the Packard Building.

And, I asked him, what is he doing? … (He answered) He is working on cells and things like that.

He asked me what I was doing. (Philip answered) I make chips, and chips are very small.

So I started asking him, how big are your cells? (He answered) Cells are maybe tens of microns, sometimes even bigger.

(Philip answering) Wow, tens of microns is really huge. I can literally put a ton of transistors inside your cells.

(He answered) What, is that possible?

So that started the conversation, and then I realized that nanofabrication is so advanced nowadays that you can make devices (transistors, memory, wires, communications) so small compared with the size of a cell that you can actually fit a whole chip inside a cell.

And, of course, I'm no wireless expert. So I found my colleague, Ada Poon, who also found the idea intriguing. And so the two of us and also the person from the medical school… (and other collaborators such as Professor Michael V. McConnell) started working on this topic.

How do we put a chip inside a cell? We researched the literature and found that nobody had done this before. There have been electronics in an organ, for example; you have probably heard about an endoscope pill camera: you can swallow the pill and take pictures inside the gut. But those are at the organ level. You can also have things that exist at the tissue level, like brain implants that sit right next to tissues. But essentially, nobody had done anything to put electronics inside a cell.

So it's like, wow, that's interesting; let me give it a try. We started working with Ada Poon on designing chips that can go inside a cell and then also communicate wirelessly. Of course, once you put it inside a cell, you cannot kill the cell; otherwise it defeats the whole purpose of putting it in there. So we found a way to put chips inside a cell and actually monitor how the cell behaves after the chip goes in. We did a lot of cell imaging to determine that the cell functions normally, and the cell can also divide. Of course, when the cell divides through the normal cell division process, the chips don't divide by themselves; that would be another miracle. That's the beginning of the project.

We are at a point where we think we can put a chip inside the cell and communicate with it. The next thing we want to do is affect the behavior of the cell: for example, having a means to release a drug remotely, wirelessly, so that the cell's behavior would be affected. That would be a means to do biology, to change how the cell works, with electronic means. Previously, the way we affect how cells behave has always been through chemical signaling or biological means, but never through electrical means. So having this electrical means perhaps would open up a new way to interact with biology at the cell level, and perhaps help us find more information about how the cell functions. That's the motivation for the work.

Stephen Ibaraki 07:16

That's really such a novel application, because you have such expertise in nanofabrication. When this other researcher talks about a cell's size, you're thinking, well, that's actually quite big. You could easily put chip-like structures in there. This is precision medicine; this is definitely personalized medicine as well. And it's translational, which means you could do something useful. So I think that's amazing.

I mentioned earlier, I just came off an interview with the chairman of the Terasaki Institute for Biomedical Innovation, and they're working at this micro level. I think your work would fit so perfectly into what they're doing; I'll mention your work to them, because they're working at these very small scales as well. But you're even smaller.

Philip Wong 08:22

Yes. Currently, we're working with another professor at UCSF, Professor Wallace Marshall. Our goal is to demonstrate that the chip can release a drug and change how the cell behaves. That's the goal of this research. I think that, if we're successful, it will bring about a sea change in how we can interact with cells. Previously that was possible only through biological or chemical means, but now we can do it remotely, using electrical means.

Stephen Ibaraki 09:03

Let's get to directed self-assembly. Can you talk about that work?

Philip Wong 10:27

Philip shares about the work at IBM, where two researchers picked up the problem of using self-assembly to pattern small features. When Philip came to Stanford, he continued the research, but started in a different direction. Philip describes the work in some detail; industry has now picked up this research area at a scale (for example, 300-millimeter wafers) best suited for industry research.

(It’s an interesting and detailed narrative thus recommend the readers go to the video interview.)

Stephen Ibaraki 18:18

Your work laid the foundation for this movement in industry to move to this entirely new, much more precise level, especially as features get smaller and smaller. Just transformational work, really setting up the industry for the future. You also have this area of device modeling. Can you talk about that?

Philip Wong 18:43

Yes. Device modeling is very important for device technology. "Devices" refers to transistors, memory, and so on. We often say that if you don't understand the physics, you cannot model. So modeling is more than just trying to produce a model of the physical world; it is really about having a very deep understanding of the physical system you're working on.

So we've done models of transistors and models of memory. For example, students who work on transistor models for carbon nanotube transistors need a deep understanding of the physics of the carbon nanotube; but stemming from that understanding, you need to be able to abstract that very complex physics into a model that, number one, captures the physics and, number two, is simple enough that you can use it very easily, so that you don't have to run a simulation for three days to get an answer. Both speed and accuracy are important.
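
As a concrete illustration of that abstraction step, here is a toy compact model in Python. This is only a sketch under simple textbook assumptions (a square-law FET with hypothetical parameter values), not the published carbon nanotube models, which capture far richer physics:

```python
# Toy compact model: abstract complex device physics into closed-form
# equations a circuit simulator can evaluate in microseconds.
# The square-law form and all parameter values below are illustrative
# placeholders, not any published carbon nanotube model.

def drain_current(vgs, vds, vt=0.3, k=5e-4):
    """Drain current (A) of a simple square-law FET model."""
    vov = vgs - vt                           # gate overdrive voltage
    if vov <= 0:
        return 0.0                           # cutoff: no conducting channel
    if vds < vov:
        return k * (vov * vds - vds**2 / 2)  # triode region
    return 0.5 * k * vov**2                  # saturation region

print(f"{drain_current(1.0, 1.0):.3e}")      # saturation: 1.225e-04
```

A SPICE-style simulator evaluates thousands of such closed-form equations per time step, which is why a good compact model must be both fast and faithful to the physics.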

So we've done computer models for carbon nanotube transistors, and all our models are open source on the web. We put our models in a National Science Foundation-supported hub called nanoHUB, which has been a great repository of models.

Philip talks about the models being downloaded tens of thousands of times and cited in research. They have also done a model of RRAM (resistive random access non-volatile memory) in collaboration with others, which involved understanding the physics of the resistive switching event and then translating and abstracting that understanding into simple equations. Those models are also used by many people in the field, including some companies using them for their own product development.
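
To give a flavor of what such a phenomenological memory model looks like, here is a deliberately simplified sketch. The exponential forms are loosely inspired by published filament-gap RRAM models, but every constant here is a made-up placeholder; the real SPICE models are fit to measured devices:

```python
import math

# Toy RRAM sketch: one state variable (filament gap, in nm) evolves under
# applied voltage; read current falls off exponentially with the gap.
# All constants are hypothetical placeholders for illustration only.

class ToyRRAM:
    def __init__(self, gap=1.5, gap_min=0.2, gap_max=1.8):
        self.gap, self.gap_min, self.gap_max = gap, gap_min, gap_max

    def current(self, v, i0=1e-3, g0=0.25, v0=0.25):
        # Tunneling-like read current: smaller gap -> larger current
        return i0 * math.exp(-self.gap / g0) * math.sinh(v / v0)

    def apply_pulse(self, v, dt=1e-9, rate=1e8):
        # Positive bias shrinks the gap (SET); negative bias grows it (RESET)
        self.gap -= rate * math.sinh(v) * dt
        self.gap = min(max(self.gap, self.gap_min), self.gap_max)

cell = ToyRRAM()
before = cell.current(0.1)    # read in the high-resistance state
for _ in range(20):
    cell.apply_pulse(1.0)     # SET pulse train lowers the gap
after = cell.current(0.1)     # read again: current has increased
print(after > before)         # True
```

The point of the abstraction is that a few algebraic update rules like these stand in for complex filament physics, so circuit designers can simulate millions of cells.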

Stephen Ibaraki 21:46

It speeds up the research time when you have models.

Philip Wong 21:53

Yes, it does, because you need to be able to work not only at the single-device level, but also to string those devices together in a circuit and in a system.

Without a model of the device, whether the memory or the logic transistor, you will not be able to know what the circuit will look like, or how the whole system will perform.

For example, with carbon nanotube transistors, at the transistor level we know how much current drive you have and how much power you consume in a single device. But then the next question is: okay, you have transistors working in a system. Let's say you build your entire iPhone out of carbon nanotube transistors; what would be the system-level impact? Is it going to give a longer battery life? Is it going to be faster, and so on?

These models help you examine the system-level impact. We actually use them, along with our industry partners, to assess: if you made this kind of device possible, what would system performance look like? That's one question. The other kind of question goes in the other direction: if I change the way I design the device, how would it affect the system-level performance? That's very useful for device technologists, because we need some guidance about what to do.

Of course, through our understanding of the device physics and of how systems are designed, we have some inkling about the impact it may have. But to translate that inkling into quantitative numbers requires a model and a system that simulates it.

Stephen Ibaraki 23:52

I can see how this would improve the efficiency of the fabrication process, right? Because you're modeling prior to fabrication, you have a pretty good idea whether it's going to work or not.

Do you have a sense of what the percentage is when you do all of this modeling?

You have these design tools, and they'll have some of this embedded in them as well. And then when you fabricate, you have a pretty good idea it's going to work. What is that percentage? Or can you quantify it in that way?

Philip Wong 24:19

We started by talking about modeling, but I guess your audience may be familiar with another buzzword today: the digital twin. A model is exactly that, a digital description of the physical world, right? It's basically a digital twin.

Being able to make this model, or create a digital twin, is clearly important for speeding up the development process. You can try things out very quickly. You can also screen out a lot of ideas that you think might be good but actually are not. You save a lot of time by screening out things that don't work.

Of course, you would never find out which things actually work by modeling alone; you have to build the real thing. It's just like how you will never find out whether a product is well accepted in the market until you actually push it out. But you can do some estimation; you can run some focus groups and learn that something is not going to work.

Modeling is the same thing; a digital twin is the same thing. It helps you screen out the possibilities that obviously won't work, but it won't tell you what will work. It will help point you in the right direction, but then the detailed work has to be done in experiment.

Stephen Ibaraki 25:58

There's this integration of AI, where you lay out what's going to happen when you fabricate. So between the two, you can improve the results, right?

(Philip takes this into the broader context of modeling the world.)

Philip Wong 26:13

So this digital twin idea is just modeling on steroids: you're able to model the entire world. That's where society is going. That's where technology is going.

We are at a point where computers are extremely powerful. You may have very complex equations describing complex interactions between different parts, because oftentimes the complexity of the model lies in the complexity of the interaction between the parts. You have basically the entire world you want to model, and there are so many moving parts that the problem becomes extremely complex. But computing technology has advanced tremendously, and you will be able to model a good part of the world in the future, in the very near future, I think.

Stephen Ibaraki 27:15

Transformational work that definitely drives all of the acceleration and progress we're seeing, including in brain-inspired computing.

I've been keynoting on things like brain-inspired computing, and you're doing such foundational work in this area. So could you tell us about your brain-inspired computing work, your neuromorphic computing?

Philip Wong 27:49

They are roughly the same phenomenon. I don't know what the exact distinction is, but roughly speaking, it's a different kind of computing inspired by how the brain works. How much you adhere to the reality of how brains work is a subject for dinner party discussion. But generally speaking, it is a kind of computing inspired by how the brain works.

I came into this quite a few years ago, when DARPA started a project called SyNAPSE; some of the audience may be familiar with it. The SyNAPSE project had many teams participating in it. I was part of a team from IBM, led by IBM Research, and they eventually produced a chip called TrueNorth.

I wasn't in the part that designed TrueNorth. But within the bigger SyNAPSE project, there was a part that tried to develop electronic elements that emulate the functions of a synapse in the brain.

The brain consists of synapses and neurons. Synapses are basically biological structures that weigh the importance of the input signals and send them off to the neurons; each neuron then sums up inputs from many synapses and determines whether or not to fire an output pulse.
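
The weighting-and-summing behavior described here can be sketched as a leaky integrate-and-fire neuron, a generic textbook abstraction rather than the TrueNorth design or any specific device; the weights and constants below are illustrative:

```python
# Synapses weight incoming spikes; the neuron leaks, accumulates the
# weighted sum, and fires an output pulse when a threshold is crossed.

def lif_neuron(spike_trains, weights, threshold=1.0, leak=0.9):
    """Return the time steps at which the neuron fires."""
    v, fired = 0.0, []
    for t, spikes in enumerate(spike_trains):
        v *= leak                                         # membrane leak
        v += sum(w * s for w, s in zip(weights, spikes))  # synaptic weighting
        if v >= threshold:
            fired.append(t)                               # emit a spike
            v = 0.0                                       # reset potential
    return fired

# Three input synapses; each tuple is the spikes arriving at one time step.
inputs = [(1, 0, 1), (0, 1, 1), (1, 1, 1), (0, 0, 0)]
print(lif_neuron(inputs, weights=[0.3, 0.2, 0.4]))        # [1]
```

An electronic synapse device aims to implement the weighting step (the `w * s` products) in a physical element small and efficient enough to be replicated by the billions.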

The brain has many, many synapses: about 10 to the 10 neurons and about 10 to the 14 synapses. So there are a lot of synapses.

The idea is that if we were to emulate how the brain does computation, you will need a lot of these synapses. And since you need a lot of them, they must be very small; otherwise it would be a very big brain, right? So each one has to be very small.

The IBM team, me, and several other professors in the bioengineering department at Stanford (along with students and postdocs) set out to build an electronic device that emulates the function of a synapse, based on a high-level abstraction of what a biological synapse does.

I took that abstraction and said, okay, can I make an electronic device that produces the same behavior as that abstraction, at that high level of abstraction? After some time, my students and postdocs were able to produce that. That was the beginning of a field of research in which we would like to find electronic synapses, electronic analogs of these biological synapses. Of course, the first version we built is just one example; there are many other physical systems that could potentially do the same thing.

That was the first work that showed people that, hey, you can use electronics to reproduce this high-level abstraction of what the biology does. And then other people came in and said, hey, you did this with this device; maybe my devices can do it even better. So there has been a lot of this kind of work going on for years, with people competing to say, hey, here is a better device.

So we continued working on different types of devices. And the next level of research beyond the single device is to say: okay, we have one synapse, but the brain has 10 to the 14 synapses, and the synapse has to work together with other neuronal functions, such as firing and how to activate them.

The next level of research is: how do you string together a bunch of these synapses, as many as possible, and design circuits around them so that they perform actual functions that are useful to us, such as image recognition or language identification? Can you do something like that?

So the next level of work is to string them together with circuits and so on, and that's what we've been doing for the past few years: integrating these electronic devices together with computational elements, integrating them into a big chip, and showing that you can actually perform, for example, some neural network function.
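
One common way to picture why these memory devices pair so naturally with neural network workloads (a standard textbook view, not a description of any specific chip) is the crossbar: stored conductances act as weights, and Ohm's law plus Kirchhoff's current law perform the multiply-accumulate in analog. A sketch with made-up values:

```python
# Crossbar read-out: I_j = sum_i V_i * G[i][j].  Each column current is a
# dot product of the input voltages with one column of stored conductances,
# i.e. an analog matrix-vector multiply in a single step.

def crossbar_currents(voltages, conductances):
    """Column output currents (A) of an ideal crossbar array."""
    n_cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(n_cols)]

G = [[1e-5, 2e-5],   # conductances in siemens: 3 rows x 2 columns
     [3e-5, 1e-5],
     [2e-5, 4e-5]]
V = [0.1, 0.2, 0.0]  # read voltages applied to the rows

print(crossbar_currents(V, G))   # approximately [7e-06, 4e-06]
```

Real arrays contend with wire resistance, noise, and device variability, which this ideal arithmetic ignores.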

One of the more recent works we've done, published a couple of months ago in Nature, is a chip that has three million of these RRAM devices integrated with compute logic built with CMOS transistors. It can run a variety of neural network topologies and functions, performing tasks such as natural language processing, image recognition, and image recall.

That's the kind of work we have been doing in the last few years: basically taking the individual electronic devices that emulate biological functions and putting them together into bigger systems.

The goal is really twofold.

One is to find out whether this method of computing is actually more energy efficient than the way we're doing things today. Obviously, this is a method that is different from what we do today in, for example, computers and cell phones. But as I said last time, being different is not the same as being better. So we need to show that it's actually better, and the story continues today; I don't think we have answered that yet. There is a lot of research in the community to figure out whether this new way of computing is actually better than the conventional way. That's one direction of research.

The other direction of research is to figure out the non-idealities of the devices we make. All devices have non-idealities; they are not perfect. Can we mitigate those non-idealities to make the system function better? That's a combination of device-level advancement (understanding the physics and therefore improving the device itself, or finding a better device) combined with design and algorithms (codesign), to make sure that we capitalize on the benefits of these devices while mitigating their undesirable aspects.

Stephen Ibaraki 35:51

I'm just thinking, the basis of a lot of this work is that for a long time you've been a proponent and a pioneer of phase-change memory and metal-oxide resistive switching memory (RRAM). You and your students developed the open-source RRAM SPICE model that is used extensively by academia and industry (more than 6,000 downloads on nanohub.org). Your research group realized the first neuromorphic electronic synapse device based on phase-change memory, a foundational work that is routinely cited as the first work in the rapidly growing field of brain-inspired computing. You pioneered using RRAM for neuromorphic computing, with works ranging from device characterization to neural network system demonstrations. Your work on non-volatile memory modeling and phenomenological understanding, scaling, and neural network system demonstration has inspired significant academic and industrial activities and investments. RRAM is now a product technology from several large commercial foundries.

Your pioneering neuromorphic computing, this idea of integrating electronic capabilities that emulate what's happening in the brain, lets you solve problems using this hardware. So I think it's quite interesting.

Philip Wong 36:35

Yes, but so far we're just solving toy problems, right? We're at a very early stage of the game.

I just want to take this opportunity to energize your audience by saying that the opportunities going forward are tremendous.

A brain burns about 20 watts. The Google computer that won the Go game burns, I think, 170 kilowatts. That's several orders of magnitude off in terms of energy efficiency. So this is a tremendous opportunity going forward. I'm not saying that this is the direction we need to go. But there is tremendous opportunity to be had, because there is existence proof that you can do computation with 20 watts, and that's pretty powerful computation.
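
Taking the two figures as quoted in the conversation (about 20 W for the brain, roughly 170 kW for the machine), the gap is easy to quantify:

```python
import math

# Energy-efficiency gap between the brain and the Go-playing machine,
# using the round numbers quoted in the conversation.
brain_watts = 20
machine_watts = 170_000

ratio = machine_watts / brain_watts
print(ratio)                        # 8500.0
print(round(math.log10(ratio), 1))  # about 3.9 orders of magnitude
```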

Stephen Ibaraki 37:30

Yes, and we're running up against these power problems, right? Data centers consume so much power, and the scaling on the cloud keeps continuing.

There isn't enough power on the planet.

There's talk of the metaverse, and the metaverse requires so much computational resource that, again, you're going to need paradigm-shifting innovations. And really, you're leading these foundational, paradigm-shifting innovations.

Philip Wong 38:02

Indeed, the way we're doing things, at least at this point, is not scalable to a future world in which we expect technology to solve many problems.

Just to give an example: a study from MIT shows that training a language model of the kind we use in Siri and things like that produces carbon emissions equivalent to a hybrid car that gets 50 miles per gallon driving around the entire Earth 400 times. That is the amount of energy it consumes to train one model. And that's only one model.

And this is all for today's models. They do something useful, but far less than what we expect to be able to do.

For a future world with a metaverse and so on, we really need tremendous improvement in the energy efficiency of computation. That's foundational to everything that we do. If we look into all the things we want to do, they are all limited by the energy of computation. Finding solutions to that is really important foundational research that needs to be done.

Stephen Ibaraki 39:29

You talk about the language models. I mean, GPT-3 has 175 billion parameters, and models are trending towards exceeding a trillion.
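
A rough sense of the hardware burden those parameter counts imply: just storing the weights at 16-bit precision (an assumption; precisions vary in practice) takes hundreds of gigabytes, before counting activations, gradients, or optimizer state during training:

```python
# Memory for model weights alone at a given numeric precision.
def weight_gigabytes(n_params, bytes_per_param=2):   # 2 bytes = fp16
    return n_params * bytes_per_param / 1e9

print(weight_gigabytes(175e9))  # GPT-3 scale: 350.0 GB
print(weight_gigabytes(1e12))   # trillion-parameter scale: 2000.0 GB
```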

Philip Wong 39:44

Yes, so where do you find all the computational power to train the model, to do inferencing, and so on? I was joking with another friend of mine the other day. We talked about the metaverse, m-e-t-a, metaverse, right, like in metaphysics; but I think we should also talk about the matter-verse, m-a-t-t-e-r, physical matter.

Stephen Ibaraki 40:17

Yes. Your work on non-volatile memory modeling and phenomenological understanding, scaling, and all of that is part of the solutions, right?

That's why I think your work is just so foundational; it has really been inflection points throughout your entire career, from your research at IBM and continuing afterwards. You are always leading in many respects.

I want to shift gears a little bit. You have held leadership positions at major multi-university research centers of the National Science Foundation and the Semiconductor Research Corporation.

You are the founding faculty co-director of the Stanford SystemX Alliance – an industrial affiliate program with 38 member companies to collaborate on research focused on building systems.

You are the Director of the Stanford Nanofabrication Facility – a shared facility for device fabrication on the Stanford campus that serves academic, industrial, and governmental researchers across the U.S. and around the globe, sponsored in part by the National Science Foundation.

Can you talk about that work and why you’ve taken on these leadership roles? And what do you hope to gain for the benefit of humanity and really further in the field by working in these leadership roles with these alliances and groups, and so on?

Philip Wong 41:20

Thank you for bringing this up.

Let’s talk local at Stanford and we branch out further.

Locally at Stanford, in recent years I have taken on two kinds of responsibilities.

One is that I'm the director of the Stanford Nanofabrication Facility. This is a shared facility on campus, shared not only among campus users but also around the world. Anybody who wants to come and use it can just pay and use it.

And we don’t own any of your IP or anything like that. You just come in and do your own thing.

We provide this infrastructure for people to do research in nanofabrication: from computer chips to microfluidic devices for biomedical applications, to making nanostructures for batteries, to building solar cells and things like that. Anything that requires the use of, generally speaking, semiconductor manufacturing and fabrication infrastructure.

The Stanford Nanofabrication Facility is probably one of the first … shared facilities, I would say, at least in the US, perhaps in the world as well.

It was started back in the 80s, when I was still a graduate student. It was one of the pioneering efforts of the previous generation of faculty at Stanford, who came up with the idea that we need a shared facility to serve the broader community of people who fabricate things. Many of these fabrication processes require tools that are rather expensive; that’s number one. Number two, they require very knowledgeable staff to develop processes and keep the tools running. And the staff also have to pass on the knowledge of the fabrication processes from one generation of students to the next. One thing that constantly happens at universities is that we have a constant flow of students: by the time the students learn how to do things really well, they graduate. So we are always dealing with students who do not yet have the skills. We train them, and they develop the skills as they move along in their education.

So having this infrastructure is tremendously useful, because many of the researchers do not need to own this very complex infrastructure themselves, but they can still use it to do their own research. That’s one of the key points about these shared facilities.

And since that time, since the early 80s when Stanford started this nanofabrication facility, many universities around the country and around the world have developed these kinds of facilities as well.

And today, in the broadest scope, what people will be hearing from the CHIPS and Science Act is that they’re talking about a national infrastructure to take ideas from the laboratory to (semiconductor) manufacturing, what they call lab-to-fab. It’s the same idea on a bigger, national scale: companies and researchers that have good ideas that need to be proven out require a rather complex and expensive infrastructure to prove out those ideas. Instead of everybody building their own thing, we can come together in a common facility, well kept and run by knowledgeable staff, to prove out the ideas.

So that’s the arc: starting from the 80s with this shared facility on the Stanford campus, to today’s national infrastructure that President Biden is talking about. That’s an evolution over several decades of the semiconductor industry going from very simple laboratory-type demonstrations to a much larger scale manufacturing enterprise. So that’s the short story of the Stanford Nanofabrication Facility.

And as a faculty member here, I have myself benefited from the use of this facility for many years. So when the Dean asked me to take up this responsibility, I said, Okay, it’s time to give back. And so I took it on.

You mentioned the Stanford SystemX Alliance, which is also quite interesting.

Many years ago, also back in the 80s, several faculty members at Stanford came up with the idea of a center, which they called the Center for Integrated Systems. At the time, systems were much simpler. The Center for Integrated Systems started out as an industrial PhD program: companies come in as members, and our faculty members and students show them the research work we are doing and have a dialogue. It is really a dialogue between industry and academic research, so that the students can learn from the company members: What is important? What are they worried about? What is their product direction? The students can then shape their research in a direction that is relevant to industry. And that’s very important, because engineering is about practice, about making an impact broadly across society; you shouldn’t do engineering research without any knowledge of what companies are doing. That really is an ivory tower, and we don’t want to be an ivory tower. So we engage the companies in this way, and it has gone on for quite a number of years.

It’s a reflection of how technology goes. In the early years of semiconductor technology, a lot of the advancements were driven from the bottom. You have a new device, a new device technology, and then the application developers say: Okay, we have a new device, what can I do with it?

It starts at the bottom, with the new device technology, and then people build applications around it.

In more recent years, the world has changed somewhat, in the sense that it’s flipped the other way around. We now oftentimes start with what you want to do, from a user application point of view. Do you want a VR/AR headset? Do you want a better phone? Do you want a self-driving car? Do you want robots to go around and do things for you?

Start from that, and then we ask: what kinds of technology do I need to develop to enable that? That’s a different type of thinking. So back in around 2015 or ’16, Professor Boris Murmann and I thought it was time to rework our thinking about this industrial affiliate program. We realized we should be more engaged with companies that actually build the customer-facing systems: companies such as Apple, Google, Facebook, Amazon, and so on, companies that build customer-facing applications. From there we would then derive the needs for all the technology downstream that is foundational to those applications. So we turned this around.

We revamped the whole industrial PhD program and called it the SystemX Alliance. It starts with the word system. And we call it an alliance because it really is an alliance. It’s not a center of something, because a center of something kind of connotes the notion that I’m the center of the universe and you all come here, which is not true. It’s really an alliance between companies, industrial practitioners, and the research work that’s going on campus. That’s why we call it the SystemX Alliance. By now, this has grown into probably one of the largest industrial PhD programs on campus, and perhaps even around the country.

We have 38 member companies, from materials suppliers such as Mitsui Chemical, to companies that build systems, such as Google, Apple, Facebook, and so on.

This entire system stack, from the basic materials to the customer-facing applications: I think that’s where engineering research will be going in the future. A lot of technology development will be inspired by the eventual applications, and for engineering research we have to stay very close to what the end-user application calls for.

Stephen Ibaraki 51:46

We touched on this in the first part of the interview, Part One, where you were talking about looking really at the application layer, and then going, sort of reverse, down into the system on a chip, where the system on a chip really encompasses the needs driven from the application layer. So that’s an interesting perspective, because then it’s very efficient as well, right? I mean, you’re really tailoring to what the requirements are. So, again, I think it’s just amazing work.

I want to cover two more chunks within your marvelous history. One of which is: you took a leave from Stanford from 2018 to 2020 and took on a senior executive role at Taiwan Semiconductor (TSMC). It’s the largest foundry in the world, very famous, years ahead of other foundries out there. And you still remain as chief scientist. That’s just fascinating work.

What really drove your interest to take on a role like that, to continue to help in the innovation of this foundry? From what I understand, they’re building a complementary foundry in the US, right? And then there’s the US CHIPS Act (US CHIPS and Science Act), which I think is such a foundational Act in the United States. Any comments?

Philip Wong 53:28

I can just relate a short story from yesterday.

I was sitting with several students. We have a regular kind of faculty/student get-together over lunch, where students can ask any questions they want. I was sitting at this lunch yesterday.

One question came up: Hey, what is the career path for students? Should it be academia or industry?

One of the comments from one of my colleagues was: Well, at Stanford, we see a continuum.

I mean, we go back and forth between the two places very fluidly, without any barrier. In fact, you can see many, many examples. Faculty found companies and go work at those companies; there are many examples already. I don’t need to name them; you all know them. And many of them, after they found the companies, come back to become faculty and do the academic research and the applications as well.

This kind of going back and forth between industry and academia is perhaps one of the more distinguishing features of our engineering faculty here.

I think that is helpful in both directions.

Number one, industry really counts on academia, especially now, to bring in new ideas. Conventional ways of doing things, business as usual, won’t get you to a more visionary type of product. A lot of times, it is academics who bring new ideas to companies.

At the same time, in the opposite direction: when a faculty member comes back from an engagement with industry, he or she brings back a wealth of information from industry about what is relevant and where the pain points are.

This knowledge is then translated to the students through the classroom or through their research projects, and so on. It’s a very beneficial interaction; having this flow between academia and industry really helps in both directions.

Going back to the earlier question about my sojourn in Taiwan. It’s a very interesting personal journey.

I went to Taiwan and worked at TSMC for two years; to be exact, one year and nine months, on leave from Stanford. Most universities allow faculty to be on leave for a limited amount of time; at Stanford it’s two years.

When I took up that position in Taiwan, I had been at Stanford for 16 years without taking a break. I figured that maybe I should take a break and learn something new, because technology evolves very fast, very rapidly.

In the more than 16 years since I left IBM, technology has moved tremendously. When I left IBM, I was very familiar with all these semiconductor technologies, inside and out, because I was at the forefront; I was the one developing the future generations of technology. So I really knew what the industry was doing.

But 16 years after that, I wasn’t so sure I still knew that much about it. So it was time to go back and understand it. That’s part of the motivation. The other motivation really is the opportunity to make a difference, because most academics like to make a difference.

My mission at the company was to help them develop their research organization. To develop a new product, you have to really do front-end research work, which then gets translated into actual pathfinding and product development work.

TSMC at the time had been extremely successful in pathfinding and product development. But they realized that, because they are now at the front of the pack in terms of technology, it would be necessary for them to have headlights that reach further into the future. It’s like driving in a fog: you need to be able to see further to stay at the forefront of technology. So one of my missions was to help them build a research organization. I found that rather intriguing: being able to help a world-leading company, right at the beginning of its world-leading position, think about how to build a research organization.

Now, if you think about industrial research, you can think of Bell Labs, IBM, and RCA of the previous era, and then TI, HP, Hitachi, and Toshiba; all these very prominent industrial research labs.

Now of course, when you build a research organization for this particular company, times have changed. The era of Bell Labs with its monopoly, and then IBM with a monopoly on mainframe computers; those are gone. You cannot build a research organization in the same way that Bell Labs and IBM built theirs. It has to be something different.

Now, the question is: what is different? What is it supposed to be? That seemed like a very challenging question to me, and I found it quite interesting. So I took up this task, went over to Taiwan, and started building that research organization for them. That was really a fulfilling and rich experience.

After two years, I had to come back to Stanford; otherwise I would lose my job here, and I love my academic job as well. So I came back to Stanford, and I continue to interact with them in a consulting role as chief scientist to this day.

Stephen Ibaraki 1:01:17

Yes, and because of all the other work that you do with these major companies and materials companies, through the Alliance and other work. You graduated 48 PhD students, and among them 24 are women and minorities. So you’re very much a champion of diversity and inclusion. And because you’re building all these pathways, it creates opportunities: sometimes these companies will ask, Do you have a student you can recommend?, right?

Philip Wong 1:01:49

Yes. I’m pretty proud of the fact that I was able to build my research group with more than 50% women and minorities.

I think it took some effort in the beginning to create an environment for people who are not an obvious part of the system, and it paid off over time. As I was telling my colleagues, I don’t feel my research or my career has been held back in any way at all by these engagements with women and minorities. In fact, I learned more from them about organization and how to interact with people. It is really lovely to have these students. And through this example, I show that it is possible to build a very vibrant research program with attention to diversity and inclusion.

Stephen Ibaraki 1:02:54

Yes. It’s such a great model for the world to follow.

Now, I’m just going to set this up, because I’m going to go into the awards and then that would be the last part, maybe some additional recommendations.

Your work contributed to the advancements in silicon CMOS (complementary metal oxide semiconductor) scaling and carbon electronics. We talked about carbon nanotubes, 2D graphene, non-volatile memory, and so on.

And you pioneered concepts such as channel geometry and multiple gate electrodes to control short-channel effects and enable transistor scaling to the nanometer scale, for the 3-nm node and beyond. Your work elucidated the design principles and demonstrated the first nanosheet transistor, pointing the way toward continued device scaling beyond what was considered possible with a conventional bulk silicon transistor. This device concept is the basis of modern transistors used in high-volume manufacturing, such as the FinFET and the nanosheet transistor.

The list just goes on and on, with phase change memory and metal oxide resistive switching memory (RRAM).

It’s just so much foundational work, leadership work, foundational industry work.

This alliance, we talked about, the Stanford SystemX Alliance; and the Stanford Nanofabrication facility.

Your work with students and just so much translational but also transformational in terms of the world.

And because semiconductors are so foundational, you’ve won awards. So I just want to talk about some of those awards. You are a Fellow of the IEEE and received the IEEE Electron Devices Society J.J. Ebers Award for “pioneering contributions to the scaling of silicon devices and technology.” This is the IEEE Electron Devices Society’s highest honor, given to “recognize outstanding technical contributions to the field of electron devices that have made a lasting impact.”

The reason I’m interviewing you is the IEEE organization reached out to me to celebrate historic achievements recognized by awards.

You just received this top award, the Andrew S. Grove Award. This is the highest award recognizing outstanding contributions to solid-state devices and technology. The award criteria cover: field leadership, contribution, originality, breadth, inventive value, publications, other achievements, society activities, honors, duration, and the quality of the nomination.

The list goes on—all your contributions.

Can you talk about those awards and where do they fit in your career? What do you hope that the awards will enable you to do? Even perhaps it’ll help amplify some of the things, key concepts that you want to get out? How do you integrate that into your career, these awards and recognitions that you’re receiving?

Philip Wong 1:05:22

Thank you for bringing it up.

First of all, I would like to emphasize that even though awards tend to honor a single person or a few persons, it is really a collection of people and collaborative work that led to the recognition of the technical work. (Note: Philip names his collaborators throughout the interview and you will find them in the full video interviews.)

I should emphasize that I have been helped by many of my collaborators, friends, and colleagues at IBM, at Stanford, and also worldwide, because in many cases this was a worldwide collaboration. Even though I was singled out as the award recipient, it is really my friends, colleagues, and collaborators who made this happen, and I really appreciate that. I wouldn’t be able to call out all the names because it would take a long time, but I’m sure you know who you are, and I really thank you for the collaboration.

As far as how that fits into my career: I think most awards fall into this category, where at the time you did the work, it wasn’t clear that it would have a long-lasting impact. It takes years afterwards to realize, oh yes, this is what this work is about, and this is how it sits in the broader world of technology advancement.

And that holds true for the things I’ve been recognized for; I was singled out as the person to exemplify those accomplishments, but many colleagues contributed to the work on semiconductors and transistors when I was at IBM. At the time, it was just an idea that maybe this is where the technology would go. There was no clarity, no idea that this would be THE technology that would eventually be implemented and become useful. That goes for many of the things we do; the way technology evolves is completely unpredictable, and it is affected by many external forces, the external environment, and so on. So the only way to do it is to actively shape that environment toward the direction that we think is the right way to go. And that’s what I’ve been spending a lot of my time on lately.

You mentioned the semiconductor CHIPS Act and so on, which I believe is clearly a once-in-a-lifetime opportunity for practitioners in our field, in semiconductor technology, to make an impact on the world. Semiconductor technology is so foundational to everything we do, from self-driving cars and social media to personalized medicine and detecting and monitoring climate change.

And then making sure that we are energy sufficient, and so on; economic security, national security; almost everything we do, as you and I discussed, rests on this foundational technology. So we need to do this right! We need to be able to form alliances around the world to make sure that not only can we continue on our present trajectory, but that we also exceed our current technology improvement trend, because otherwise it would take us too long to achieve those goals.

And so it’s really time for the entire community to come together and think about how best to collaborate and form alliances, not only in the US but also across nations, to make sure that these technology advances are facilitated, with a low energy barrier, and that there is a very vibrant industry for future students to go into.

There has been a lot of talk about workforce training. Workforce training has two parts. One, you provide the training. Two, you need to have people who want that training. If you have the training mechanism but nobody wants to take the training, it doesn’t achieve the purpose. How do you make sure that there are people who want that training? They have to realize that this is an industry, a discipline, in which they can make an impact. They can have a good career, realize their career goals, and use their skills to make a contribution to the industry, to the world, to broader society. So we really need a very vibrant industry, so that potential students say: Hey, this is something I want to do, because after I finish this training, there are many opportunities and many chances for career advancement. Whereas on the opposite side, if you say, after I graduate there are only two companies I can join, that’s not a very good start to a career. So having a vibrant industry is very important, and having collaboration and a free flow of information is critically important. As I mentioned earlier, much of my research work involves collaboration across different nations.

(Philip gives examples of the RRAM model and the neuromorphic computing work collaborating with research institutes around the world such as India, US, Europe.)

So it’s a worldwide enterprise in terms of research, and we need everybody to contribute, not just some regions. Going forward, I think what we need to do as an industry, and also as a community, is to really band together and not just think about what we can do for our own state, country, or region, but rather about how we can create a vibrant industry that can actually realize the technological advances that society would benefit from.

Stephen Ibaraki 1:12:55

Yes, ultimately, it’s really for the benefit of humanity. And I mentioned Earth ecosystems and your work is so foundational.

And then just as a bit of a reminder to the audience, your work continues, and we talked about this: carbon electronics, 2D layered materials, wireless implantable biosensors, directed self-assembly, device modeling, brain-inspired computing, non-volatile memory, monolithic 3D integration, and more. And then there’s all the open source work you’ve been doing as well, because you’re trying to incentivize the industry, the world, and students; and even in the CHIPS Act there is workforce development, right?

So, you know, Philip, it was just a marvelous time I’ve had with you over these two interviews. I won’t ask you for recommendations on this because the whole interview is recommendations.

I just want to thank you for taking the time for over two hours to talk about what’s happening and how all of this is so important to the world, and its future. Thank you again for coming in.

Philip Wong 1:14:06

Thank you for the opportunity. I really appreciate it.
