The University of Illinois Urbana-Champaign | High Performance Computing and AI Podcast

This is a podcast episode titled "The University of Illinois Urbana-Champaign | High Performance Computing and AI Podcast." The summary for this episode is: Dr. Vlad Kindratenko and Dr. Eliu Huerta explain how the Center for Artificial Intelligence Innovation (CAII) at the University of Illinois Urbana-Champaign is using an IBM POWER9 cluster to research and deliver astounding deep learning solutions for their campus community and industry partners. From astrophysics to gravitational waves and neural networks, their high performance computing center has delivered breakthrough solutions for faculty and students alike.

Key Takeaways:
  • [00:05 - 00:20] Intro to the episode
  • [00:30 - 01:13] Intro to Vlad Kindratenko
  • [01:16 - 02:20] Intro to Eliu Huerta
  • [02:38 - 07:49] What is the NCSA (National Center for Supercomputing Applications) and how does it relate to the Center for Artificial Intelligence Innovation?
  • [08:29 - 13:36] Gravitational waves: Dr. Eliu Huerta's approach to solving these problems
  • [14:19 - 16:27] How computational fluid dynamics simulations can be improved with the use of AI-inspired tools
  • [18:27 - 27:24] The "Hal" supercomputer
  • [28:28 - 32:10] The future for Hal and the AI Center of Excellence
  • [32:17 - 34:52] The importance of making this technology available to students
Intro to the episode
00:14 MIN
Intro to Vlad Kindratenko
00:42 MIN
Intro to Eliu Huerta
01:03 MIN
What is the NCSA (National Center for Supercomputing Applications) and how does it relate to the Center for Artificial Intelligence Innovation?
05:10 MIN
Gravitational waves: Dr. Eliu Huerta's approach to solving these problems
05:07 MIN
How computational fluid dynamics simulations can be improved with the use of AI-inspired tools
02:07 MIN
The "Hal" supercomputer
08:57 MIN
The future for Hal and the AI Center of Excellence
03:42 MIN
The importance of making this technology available to students
02:35 MIN

Luke Schantz: Hello and welcome to IBM Developer. I'm your host, Luke Schantz. In this episode, I'm pleased to bring you a conversation with Dr. Vlad Kindratenko and Dr. Eliu Huerta, from the Center for Artificial Intelligence Innovation at the University of Illinois Urbana-Champaign. So, let's start our conversation off with some brief introductions so our listeners have a little bit of understanding of your respective backgrounds and expertise. Vlad, why don't you go first?

Dr. Vlad Kindratenko: Yeah, so I'm a senior research scientist at the National Center for Supercomputing Applications at the University of Illinois, where I lead the Innovative Systems Lab and also co-lead the Center for AI Innovation that just recently started. And so my role is to look at new technologies and try to understand how these new technologies are applicable to current NCSA needs and to [inaudible] NCSA needs, and to try to bring these technologies to our users. I'm also faculty in the Computer Engineering Department, where I teach computer engineering courses, and research faculty in the Computer Science Department, where I work with other faculty on AI-related research topics.

Luke Schantz: Thank you, Vlad. And Dr. Eliu?

Dr. Eliu Huerta: I am also a colleague of Vlad's in the Center for AI Innovation. Before starting this project of bringing together expertise across campus to do AI, I was leading the Gravity Group, which is a group of researchers that focuses on applications in physics, astronomy and cosmology, and it just happens that research in these areas is quite topical for AI. We started some of the early applications of big data in physics, and it just naturally led to the creation of this center. As Vlad said, he's been working for some time on innovative applications of hardware and software for artificial intelligence, and it just happens that we crossed paths at the National Center for Supercomputing Applications, and it was just a natural connection, how the expertise that he has in his team could be combined with applications in big data research.

Luke Schantz: That's so interesting, and it's a perfect lead-in to what I was going to ask next. So again, for our listeners who may not be familiar with the National Center for Supercomputing Applications, what is the NCSA and how does it relate to the Center for Artificial Intelligence Innovation?

Dr. Vlad Kindratenko: Yeah, so NCSA was formed in the mid-80s, or in the second half of the 80s, in response to the US academic research community's lack of access to high performance computing resources. It was formed by Larry Smarr, who wrote an unsolicited proposal to the National Science Foundation to form a center like that. And the proposal was so successful that, in fact, four different centers emerged from it, and NCSA was one of them. And so ever since the mid-80s or so, NCSA has been providing high performance computing cycles to the national computational science community, as well as all the services needed to utilize them. But very quickly, NCSA became much more than that. It became a place where not only are services provided, but where science is done, and where a lot of creative people come together to come up with innovative applications of high performance computing, ranging from traditional hardcore science domains all the way to industrial applications and applications that [inaudible] the future with high performance computing. And so, for many years now, NCSA has been running the largest NSF-funded high performance computing resources in the country: the Blue Waters supercomputer, for example, and before that, a series of other high performance computing systems. And hopefully, in the future, NCSA will continue to provide the same type of resources to the national computational science community, as well as to university researchers. Now, Eliu, maybe you can say something about the connection to the Center for AI Innovation.

Dr. Eliu Huerta: Sure. So, going back to the time that Vlad was referring to, when Larry Smarr pushed to establish supercomputing research in the US, one of the main motivators was that he was also a physicist and was interested in solving Einstein's equations. And to do that, since these equations are very complex, he needed supercomputers, and so he had to go to Germany every summer to get access to these supercomputer resources. And it just happens that, at the time, Larry Smarr recruited Ed Seidel, who was also director of NCSA from, I think, 2014 to 2017. And when he returned to Illinois, it just happens that he recruited me to NCSA to again create a group that would be doing research on this front, which is numerical relativity and gravitational wave astrophysics. And it was perfect timing, because it was at a time when detectors in the States were finally capable of seeing gravitational waves. And so, NCSA was part of that story at the level of designing the community software that was used to understand these sources. And it just happens that things naturally evolve, and at NCSA, we are not content with only being leaders in topics that are well established, like high performance computing, or what Vlad is doing, being at the leading edge of new technologies for hardware. What we are also trying to do is to find ways to disrupt what is currently in the market or, if you want, in academia. And one of the things that was very topical at the time had to do with how we harness the data revolution, or big data research. And I remember having conversations with Vlad about this, and he would be telling us, "Well, there is a major difference now between traditional machine learning and deep learning or artificial intelligence." And he had a lab that he was running with several GPUs. And at the time, when we were doing research for LIGO, we realized that some of the tools that he was using, some of the hardware that he had at his disposal, would be great for us to do research. And so with my team, we started thinking about how to innovate on this front. And in 2016, late 2016, when LIGO finally announced the discovery of gravitational waves, we decided to go and apply AI to try to do gravitational wave data analysis, which is something that requires low-latency analysis of data. It is not a lot of data, but it is very fast; they call it "high-speed data." And so, we realized that AI could make a major difference in the way we are doing this type of analysis. And it was a great combination of these different needs. We are at the forefront of research, but it just happens that NCSA is a very unique place. It is not your typical astronomy or computer science or physics department, because you have people from all different areas working in the same building and they tend to interact a lot. And it was great that Vlad was in the building, that we got to talk to him, and then we became aware of these new technologies, and then everything happened after that.

Luke Schantz: That is really fascinating. It sounds almost, it's very cinematic or something, bringing all of these different interests and expertise together. So now let's maybe dig into, you had already hinted at some of the expertise and some of the things that are going on in the building. We got astrophysics, we got agriculture, there's all kinds of industry relationships. So maybe, just because I find it so fascinating, let's start with the astrophysics. Eliu, you mentioned the gravitational wave. So could you run us through what does that look like? How is this sausage made in the sense of, you have these hard problems, you have a lot of information. How do you approach, how do you use this technology to solve these problems?

Dr. Eliu Huerta: Okay, so let me explain the traditional approach to how this is done. With numerical relativity, which is solving Einstein's equations, you solve these complex equations and then you get some information out of them, which is a time series; you can imagine music. You get some time series that tells you the signature of the collision of two black holes, for example, or two neutron stars. Then you solve these equations multiple times for different scenarios. Maybe the two black holes have equal masses, maybe they are rotating, or maybe one of the black holes is heavier than the other. And so, you get a catalog of all these different signatures, or songs, if you want. And then, traditionally, what people do is they create a model that can produce very rapidly all these different signatures. Now, you are no longer using numerical relativity, because it takes several weeks just to complete a simulation of two black holes colliding. So you now have a big catalog of different signatures. And so, traditionally, what people would do in LIGO is, you get the data from the detector and then you try to match the data against this catalog. So if, for example, you assume that you have about half a million of these model signatures, then you would have to do template matching between a second of data, for example, and all these different 500,000 signals. As you can imagine, that is super time consuming, and you require dedicated computational resources to do this analysis in real time. Because it just happens that if you have two neutron stars and they collide, they also emit light, and so you want to tell your friends that are managing a telescope, your astronomy colleagues, that they need to go and look at a certain spot in the sky because they are going to see light in a few seconds. So all of this has to happen fast. And so, what we thought was the following. What if, instead of doing this very complex analysis, you go and learn all these features, these model waveforms or songs, in a hierarchical way? And what algorithm can do that? Well, neural networks. There you have AI. And so, what we did was to take one of these big catalogs that has millions of model signals, and then we applied a lot of very novel techniques that were not available in the literature. So we started, for example, to apply curriculum learning, which was common in image analysis, but we now did this for time series. So you start with songs that are very loud, and then you gradually reduce the loudness until you cannot hear them. Or you place them anywhere in the data stream, because gravitational waves can be anywhere and you cannot predict when you're going to find them. So after incorporating these new ideas, we ended up with a neural network that is just a few megabytes in size. So you no longer need a supercomputer to go and find gravitational waves; you can use your phone. And so you can go and place this app or executable, and then go analyze LIGO data in real time. You no longer need these dedicated resources. And it was revolutionary. People thought at first, "This is not possible. This is a joke. We have been doing this for decades, now you come and tell us that with this tiny executable you can do the same thing? I don't believe you." And it was a sociological issue at the beginning, but it just happens that now, after three years, everybody around the planet is trying to do this.
And when I am invited to go and review proposals from people in Asia or in Europe, they are talking about this as, "This is great, everybody should be doing this now." But all of this started at NCSA. It was a very beautiful idea, and we started with gravitational waves, but now we are doing this for a great portfolio of other applications, for example, cosmology. We are also trying to combine what we do at NCSA, like scientific visualization, to understand how neural nets abstract knowledge and then make predictions, so that we can trust these algorithms when we use them for other, more difficult decision-making processes like, for example, cybersecurity or applications to medicine. So everything has evolved very rapidly, from things related to time series analysis, for example, industry applications, to big missions that we have at NCSA like electromagnetic surveys, to other applications like, and Vlad can tell you about this, how we study turbulence. So we have developed a very diverse portfolio of applications very rapidly, and one of the big things that we are doing as well is figuring out how to do AI at scale. I think we'll talk about that in a few minutes.
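To make the curriculum idea above concrete, here is a minimal, illustrative sketch (not the CAII/LIGO code; the waveform is a toy chirp rather than a numerical-relativity template): a synthetic signal is injected into noise at a random position, the injection starts loud and gets quieter each epoch, and a small 1D convolutional network learns to flag whether a signal is present.

```python
# Illustrative sketch only (assumed toy setup, not the CAII/LIGO analysis):
# inject a toy "chirp" template into noise at a random position, start with
# loud injections and lower the loudness each epoch (curriculum learning),
# and train a small 1D CNN to decide whether a signal is present.
import numpy as np
import torch
import torch.nn as nn

def chirp(n=2048, f0=20.0, f1=200.0, fs=2048.0):
    """Toy stand-in for a modeled waveform: a windowed linear chirp."""
    t = np.arange(n) / fs
    phase = 2 * np.pi * (f0 * t + 0.5 * (f1 - f0) / t[-1] * t**2)
    return (np.sin(phase) * np.hanning(n)).astype(np.float32)

def make_batch(template, amplitude, batch=32, length=4096):
    """Half the examples get the template added at a random offset."""
    x = np.random.randn(batch, length).astype(np.float32)
    y = np.random.randint(0, 2, size=batch)          # 1 = signal present
    for i in range(batch):
        if y[i]:
            start = np.random.randint(0, length - len(template))
            x[i, start:start + len(template)] += amplitude * template
    return torch.from_numpy(x).unsqueeze(1), torch.from_numpy(y).long()

model = nn.Sequential(                               # small 1D CNN detector
    nn.Conv1d(1, 8, kernel_size=16, stride=4), nn.ReLU(),
    nn.Conv1d(8, 16, kernel_size=16, stride=4), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(), nn.Linear(16, 2))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()
template = chirp()

# Curriculum: injections start loud and become progressively quieter.
for epoch, amplitude in enumerate(np.linspace(5.0, 0.5, 10)):
    for _ in range(50):
        x, y = make_batch(template, amplitude)
        opt.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        opt.step()
    print(f"epoch {epoch}: amplitude={amplitude:.2f} loss={loss.item():.3f}")
```

In the real analyses the templates come from numerical-relativity catalogs and the noise matches detector characteristics, but the shape of the training loop is the same: loud, easy examples first, then progressively quieter ones placed at arbitrary positions in the data stream.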

Luke Schantz: That's so fascinating, and it's really enlightening to hear a story about how these rare high performance computing and supercomputing resources can be used to create software tools that lower the bar and make the technology and the process so much more accessible to everybody. That's a fascinating story to hear, and I think it really goes to show why it's important for us to invest in things that seem like abstract science, where it's like, "What does this have to do with what's going on?" But so much comes out of it. He had mentioned the turbulence notion. Could you tell us about that?

Dr. Vlad Kindratenko: Yeah. So we have a project in the Center for AI Innovation where we are looking at how computational fluid dynamics simulations, which are very complex, very time-consuming simulations, can be improved with the use of AI-inspired tools. And there are actually several approaches to this and several useful techniques that AI can bring. These computations are very time consuming, and a lot of them need to be run to design any sort of modern piece of machinery, particularly things like aircraft, where you have to understand the dynamics of the entire body. And so, AI tools can be very instrumental in improving the performance of this, because what becomes possible now is that some parts of the computations can be replaced with AI predictions, with models that can be trained on some experimental data or some previous simulation data. And then, instead of recomputing these results time and time again, we can simply predict them using these neural networks. Another interesting development is that we can now also build neural networks where the physics actually becomes an integral part of the process driving what the neural network is computing. So we can build these physics-inspired neural network models that embed some features of the physics that governs the process, and they can be used to generate results much faster. And the other use of deep neural networks in CFD simulations is that it suddenly becomes possible to solve problems which have not been solvable before, because we can now run a much larger number of simulations, we can run much larger simulations, and we can do an analysis of the entire solution space to try to find specific solutions to specific problems. And so, this actually turned out to be a very interesting development that is of great interest to industrial partners at NCSA, many of whom are using CFD simulations as a vehicle to develop new machinery, new tools and new products.
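As a rough illustration of the physics-inspired idea Vlad describes, where the governing equations shape what the network computes, here is a minimal sketch assuming a toy 1-D viscous Burgers equation rather than any particular CAII CFD problem: a small network u(x, t) is trained so that the PDE residual, computed with automatic differentiation, is driven toward zero alongside a simple initial condition.

```python
# Minimal physics-informed network sketch (toy example, not the CAII CFD code):
# a small MLP u(x, t) penalized by the residual of the 1-D viscous Burgers
# equation, u_t + u * u_x - nu * u_xx = 0, plus an initial-condition term.
import math
import torch
import torch.nn as nn

torch.manual_seed(0)
nu = 0.01
net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                    nn.Linear(64, 64), nn.Tanh(),
                    nn.Linear(64, 1))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

def pde_residual(x, t):
    """Burgers residual computed with autograd; this is the 'physics' term."""
    xt = torch.cat([x, t], dim=1).requires_grad_(True)
    u = net(xt)
    grads = torch.autograd.grad(u, xt, torch.ones_like(u), create_graph=True)[0]
    u_x, u_t = grads[:, :1], grads[:, 1:]
    u_xx = torch.autograd.grad(u_x, xt, torch.ones_like(u_x), create_graph=True)[0][:, :1]
    return u_t + u * u_x - nu * u_xx

for step in range(500):
    # Random interior collocation points on x in [-1, 1], t in [0, 1],
    # plus the initial condition u(x, 0) = -sin(pi * x).
    x = torch.rand(256, 1) * 2 - 1
    t = torch.rand(256, 1)
    x0 = torch.rand(256, 1) * 2 - 1
    u0 = net(torch.cat([x0, torch.zeros_like(x0)], dim=1))
    loss = pde_residual(x, t).pow(2).mean() + (u0 + torch.sin(math.pi * x0)).pow(2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

A real setup would also enforce boundary conditions and validate against reference simulations; the point here is only how the governing equation enters the loss instead of being re-solved numerically every time.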

Luke Schantz: I'm curious to hear how you work together with industry. What does a relationship like that look like?

Dr. Vlad Kindratenko: In the early days at NCSA, it was recognized that having access to a supercomputer also enables things that industry could benefit greatly from. And the NCSA Industrial Partners Program was established in those early days, in the 90s. This Industrial Partners Program continues to exist; it's called NCSA Industry these days. The purpose of this program is to basically provide access to the latest and greatest technology and solutions developed at NCSA to our industrial partners, and to enable them to make use of this technology for different purposes. In fact, I actually came to NCSA as a postdoc working on a project sponsored by Caterpillar, to build a virtual reality system that, at that time in the mid-90s, actually required a supercomputer to render the 3D scenery. And this technology was eventually picked up by Caterpillar, and they built virtual reality setups in their own facilities and continue to use them. So at that time, it was a revolutionary technology that gave an advantage to the companies that had access to it. And today, we see exactly the same development with AI. Companies that can figure out how to make use of AI capabilities essentially have a technological advantage over other companies, because they can solve their corporate problems much faster, get to their products much faster, and deliver new products and services to the community. And so, the NCSA Industry program has been instrumental in enabling these companies to take advantage of the latest developments.

Luke Schantz: I wanted to ask you specifically about the amusingly named "Hal" supercomputer that you have there, and how it uses the IBM POWER9 cluster.

Dr. Vlad Kindratenko: Yeah. So some number of years ago, three years ago, we actually looked at the research community on campus and asked this question: "What do you use today to run your deep learning models?" And amazingly, the answer was that the majority of researchers had a small GPU system somewhere under their desk and just used that. And with that, they were not able to achieve the full potential of deep learning. They were limited in the amount of computation they could do, in the size of the models they could run, and so on. And so, we asked this question: "What would be a useful resource that all the faculty on campus could make use of to advance the state of the art in their research fields using deep learning?" And so we came up with this idea to build a computer system that would be a shared resource for everybody on campus and could help them. And there was actually a program at the National Science Foundation called MRI, the Major Research Instrumentation program, that was designed to fund building research instruments that can be shared resources for campus communities, so we applied for this research instrumentation grant. In order to apply for it, we actually had to come up with an idea of what our system would look like. And so we did some research, we looked at what sort of technology was out there, and especially what sort of technology was going to be available, say, half a year or a year from that point, because it takes some time to get funded. And we realized that the upcoming technology that was going to bring the next level of advancement was the IBM POWER9 system, and this is actually the same technology that was developed by IBM for the other supercomputing centers in the domain. And so, we decided that we would build our cluster using that technology, because that was the state of the art, something that provided capabilities no other technology would. So we worked with IBM to come up with a proposal using a system that could provide these capabilities, submitted this proposal to the National Science Foundation, and we actually got funded. And so, we built this system, which became operational about a year and a half ago, called "Hal," which stands for Hardware-Accelerated Learning cluster, "Hal" for short. This is actually a 16-node IBM POWER9 cluster that embeds the latest IBM microprocessor technology and the latest GPU technology, or it was the latest up until this summer when Nvidia announced the next generation of GPUs, and also the latest interconnect technology. And so, this cluster was built to support a variety of deep learning applications, and for that, we use an IBM software stack developed specifically to provide support for the TensorFlow and [inaudible] frameworks, including distributed deep learning, with the ability to run this learning across multiple GPU nodes. And so now, this cluster has become the centerpiece of many of the research activities on campus that utilize deep learning as their technology of choice. We have well over 300 users on the system. These users are running jobs constantly, using anywhere from one GPU to many, many nodes of GPUs. We've done our own benchmarks where we can train ResNet-50 on ImageNet in just under an hour on this cluster, something that, in the early days when this network and this dataset were first developed, would take many, many days to train. On this system, you can train it in a very short period of time.
And so now, we support a very large number of users on this system who actually go and do their science, because they are no longer concerned with the availability of resources, or the availability of software, or the ability to run something with large datasets. Their only concern is how to advance data science, and this cluster provides the platform for that. Maybe, Eliu, you can say something about the use of this.
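As a rough sketch of what distributed deep learning across GPUs looks like in practice, here is a minimal data-parallel training loop using plain PyTorch DistributedDataParallel. This is illustrative only: Hal's actual stack is IBM's, the dataset here is a synthetic stand-in rather than ImageNet, and the launch command and process counts are assumptions.

```python
# Minimal multi-GPU data-parallel training sketch (illustrative; not Hal's
# IBM software stack). Launch with, e.g.: torchrun --nproc_per_node=4 train_ddp.py
import os
import torch
import torch.distributed as dist
import torchvision
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler

def main():
    dist.init_process_group("nccl")                 # one process per GPU
    rank = int(os.environ["LOCAL_RANK"])
    torch.cuda.set_device(rank)

    model = torchvision.models.resnet50(num_classes=1000).cuda(rank)
    model = DDP(model, device_ids=[rank])

    # Synthetic stand-in for ImageNet so the sketch is self-contained.
    dataset = torchvision.datasets.FakeData(
        size=4096, image_size=(3, 224, 224), num_classes=1000,
        transform=torchvision.transforms.ToTensor())
    sampler = DistributedSampler(dataset)
    loader = DataLoader(dataset, batch_size=64, sampler=sampler, num_workers=4)

    opt = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    loss_fn = torch.nn.CrossEntropyLoss()

    for epoch in range(2):
        sampler.set_epoch(epoch)                    # reshuffle shards across ranks
        for images, labels in loader:
            images, labels = images.cuda(rank), labels.cuda(rank)
            opt.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()                         # gradients all-reduced across GPUs
            opt.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()
```

On a multi-node system the same pattern extends across nodes through the cluster's scheduler and interconnect; only the launcher arguments and node counts change.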

Dr. Eliu Huerta: Yes. So there are three examples. One is related to electromagnetic surveys; this is, for example, people that have telescopes and want to study galaxies. And NCSA led one of the most ambitious projects on this, called the Dark Energy Survey. So they were able to capture about 300 million galaxies. And galaxies have different shapes, and they tell you a lot about how they formed. If you look at an elliptical galaxy, you know that it is an old galaxy. If you look at something that looks like a spiral, that's a young galaxy where stars are being formed. And so it just happens that astronomers want to see what they are looking at, they want to understand it. And what they do is they tend to classify galaxies according to their morphology: elliptical, spiral. Now, the problem with this survey is that with 300 million galaxies, you're not going to use your students to classify them by hand. That's going to be super painful and it will take a lot of time. And so, we used Hal to develop a method that is automated, using neural networks to do this type of analysis. And the beauty of this is that we combined information that we obtained from a citizen science campaign, the Galaxy Zoo project, so we trained neural networks with this information from a different survey. And now, with Hal, we train a neural net in just 2.1 minutes using the entire cluster. Usually, if you only use one Volta GPU, it would take about two, two and a half hours, but with the entire system, you can finish that in minutes. Now, what is the beauty of that? Well, the beauty of that is that by training these models so fast, we were able to understand how the neural net was classifying the galaxies. And it just happens that this way of solving the problem was so appealing that the Department of Energy, which funded us to do this research, featured it as a highlight. And not only that, we were also selected to go to the Visualization Challenge at Supercomputing 19, and this visualization was selected as one of the six semifinalists there. And just to put the cherry on top of the cake, this was also featured in the "I Am AI" GTC keynote in May. So Hal was at the core of all these developments. We had the computational resources, the ability to do this type of work, because we go and talk with Vlad and we tell him, "We need to use the cluster for a couple of hours on a given day. How do we go about this? How do we make sure that we are using the latest technology from IBM, from Nvidia, so that we can produce top-class research?" So that is one example. Another one has to do with how we use Hal, which has 64 GPUs, to prepare for some of the grand challenges on supercomputers like Summit, which is a larger-scale version of Hal. And so, we used Hal to train a neural net that is able to measure how fast black holes are spinning. Now, this is a very complex problem. We need millions of these model signals, as I explained before, to try to understand the parameter space. Now, it just happens that this neural net is so complex that using just one GPU, it would take us about a month to train it. With Hal, we reduced the time to insight to just 12 hours, so that is amazing. But then, we asked the question, "What if we use the knowledge that we have from Hal, which is related to all these different hyperparameters or knobs that you tune to train it, and we go and do this on Summit?"
And so, with the parameters that we identified on Hal, we went ahead and tried to train this same model on Summit, using over 6,000 GPUs. Minimal changes, almost all the hyperparameters worked off the shelf, and in just one hour, we had the same model, same accuracy, full convergence, et cetera. So Hal, in this way, is serving as the bridge to go onto bigger platforms. And this is not trivial. People have developed software stacks to try to do this, identifying the hyperparameters, but it just happens that the configuration on Hal is so good that whatever information you get out of the system, you can go and almost transplant it into larger-scale systems. I haven't heard of any other application that has this type of capability, so it is great.
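One lightweight way to picture this hyperparameter hand-off is a configuration tuned on the small system and rescaled for the big one. The sketch below is hypothetical: the values and the linear learning-rate scaling rule are common large-batch-training heuristics, not the exact recipe the team used on Summit.

```python
# Hypothetical illustration of carrying hyperparameters from a small cluster
# (Hal, 64 GPUs) to a much larger one (Summit, thousands of GPUs). The values
# and the linear learning-rate scaling rule are placeholders, not the team's
# actual configuration.
import json

hal_config = {
    "base_lr": 1e-3,            # tuned on the small system
    "batch_size_per_gpu": 32,
    "gpus": 64,
    "warmup_epochs": 5,
    "optimizer": "adam",
}

def scale_config(cfg, target_gpus):
    """Keep the per-GPU batch size fixed and scale the learning rate with the
    global batch size (linear scaling rule)."""
    scaled = dict(cfg)
    scaled["gpus"] = target_gpus
    scaled["base_lr"] = cfg["base_lr"] * target_gpus / cfg["gpus"]
    return scaled

summit_config = scale_config(hal_config, target_gpus=6000)   # "over 6,000 GPUs"
print(json.dumps(summit_config, indent=2))
```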

Luke Schantz: That's really interesting, and it reminds me, it's a similar pattern to what you were mentioning before, how, from the systems you have there, you've gotten to the point where you can do processing on an iPhone and find gravitational waves, and the same thing here. Because I'm imagining getting time on a system like Summit is probably difficult to do and there's a lot of scheduling. So if you can figure out your system on something you have, then when you get time there, you can be really effective, versus trying to hash it out and figure out what you're doing then.

Dr. Eliu Huerta: That's right. And it just happens that, to get time on Summit, we had to come up with preliminary results. And what we used as preliminary results were the runs that we had made using the entire cluster at NCSA. So that opened the door as well, to get access to those resources.

Speaker 4: What do you see in the future for Hal and for the AI Center of Excellence that you can't do today, that you wish you could do with those assets to support your industry partners?

Dr. Vlad Kindratenko: So with regards to Hal, the system is big, but not big enough. We would really like to see the system grow and have more and more computing power in it, so we can try to solve much larger problems than we can solve today. So that's one dimension in which we are interested. In another dimension, we are also interested in more interesting architectural modifications to the system, in particular things like field-programmable gate arrays, FPGAs, that can enable a new breed of applications, particularly inference applications where you have a trained model and you just have to run this model against very large datasets that you have to classify or sort or filter somehow, so we are working on that as well. We have a project where we are building the bits and pieces of an inference pipeline that will enable us to run complex neural networks on FPGAs, and then link it together with traditional frameworks such as PyTorch, for example, to enable others to seamlessly use the FPGAs underneath. So this is another direction. Then there is the development of user interfaces and the ability to make easy use of this technology for people who are not computer scientists by trade. For that, we've been developing interfaces where you can just connect to the Hal cluster through a web browser and launch a Jupyter Notebook, because this is, for example, the typical, traditional way a lot of people start developing models at first. So now, we have this technology in place where you don't need to know anything about file systems or secure shell logins into the system; you just use a web browser and you have your traditional notebook. We are also looking at other tools that enable the use of existing models. We actually have a couple of tools on Hal which are currently supported, but we are looking at enabling more of these tools. One of them is [inaudible], which is essentially a collection of machine learning tools, including some neural networks, that lets you simply provide model parameters through the web browser and train that model or have it perform computations on some dataset. Another one is IBM Visual Insights, which is also a web browser-based software package for running complex deep learning models. IBM Visual Insights in particular is interesting because there are very few models developed in it, like a handful of models, but those are models which are frequently used for whole classes of problems. And so now, it's possible for somebody who has no training whatsoever in computer science, but who understands how deep learning works in general, to just load their datasets into this tool and, with a few clicks of the mouse, start training on the dataset and get some results. Just recently, we did an interesting study where we took one of the datasets available online, a COVID chest X-ray dataset, and we were able to upload this dataset into the IBM Visual Insights tool and train a model very quickly. And that model was actually pretty selective for COVID versus non-COVID X-ray images. So we are looking forward to having more tools of this nature, and to enhancing existing tools, which will enable users of Hal to use the system without actually knowing much about the underlying hardware or the underlying technologies that run there; domain specialists should be able to use it to solve their problems.
So this is another big direction we are looking at in supporting Hal.
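For readers curious about the hand-off step an FPGA inference pipeline like this needs, here is a minimal, hypothetical sketch: exporting a trained PyTorch model to ONNX, a format many FPGA and accelerator toolchains can consume. The model, file names, and opset below are placeholders; this is not the specific pipeline NCSA is building.

```python
# Hypothetical hand-off step for an FPGA-backed inference pipeline: export a
# trained PyTorch model to ONNX, which many accelerator toolchains accept.
# The model and file names are placeholders, not NCSA's actual pipeline.
import torch
import torchvision
import onnx

model = torchvision.models.resnet18(num_classes=2)      # stand-in "trained" model
model.eval()

dummy = torch.randn(1, 3, 224, 224)                     # example input shape
torch.onnx.export(
    model, dummy, "model.onnx",
    input_names=["image"], output_names=["logits"],
    dynamic_axes={"image": {0: "batch"}, "logits": {0: "batch"}},
    opset_version=13)

# Sanity-check the exported graph on the host before handing it to the
# FPGA toolchain (or to an ONNX runtime for CPU/GPU fallback).
onnx.checker.check_model(onnx.load("model.onnx"))
```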

Luke Schantz: So let me ask you this, is there anything I didn't ask you that I should have or is there any closing thoughts?

Dr. Eliu Huerta: One other thing that we have learned, and I'm sure Vlad can tell you many stories about this, is that we will only know where to go, where the challenges are, when you make this technology accessible to students. And I'm talking about students in particular because it just happens that some of the major disruptive developments from NCSA came from students, like Mosaic. The work that we have been doing with AI, it is because we provide access to these resources to students who are very bold, who are eager to go and try new things, and they are the ones who tell you, "The system is great, but we really need the following to go and address this problem." And so, it is difficult to predict where the technology is going to be a few years from now, but we can only say that it will continue to move in the right direction when these innovative minds get access to the resources. And so, using Hal as one of the pillars of the training program run by the center is essential. And that is why we emphasize these specific aspects. One of the first goals of the center is to go and democratize access to deep learning training, and this is what Vlad and I have been doing over the last year, providing access to these resources to all sorts of students. And they show you: with just access to these resources, for example through hackathons, in a couple of days they come up with some really innovative solutions to big problems. Faculty members who suggested the problems think they are going to be impossible to solve in two days, and then suddenly these students come up with some really great solutions. So I think this is the way forward. Continue to provide access to students, give them an idea of what they can go and try with these technologies.

Luke Schantz: That's a really heartening message and I love that it's based on things that you could get ahold of today. If you learn some basic Python and Jupyter Notebooks, now you're in a position that when you get to college or you have access, you could actually make use of these sophisticated systems.

Dr. Vlad Kindratenko: Yeah, that's right. And so, we've been providing training sessions for students to learn how to use the system, how to get onto the system, how to use basic tools, and also how to build basic models and train those models. And as Eliu was saying, it is actually quite refreshing to see the students come up with such unique, interesting and powerful solutions using this technology, once they realize how much power they have in their hands.

Luke Schantz: Well, I hope you enjoyed our conversation with Vlad and Eliu. Please take the time to like and subscribe, and we'll see you again soon on another episode of IBM Developer Podcast.

Today's Guests

Dr. Volodymyr Kindratenko

Assistant Director, National Center for Supercomputing Applications (NCSA) at the University of Illinois at Urbana-Champaign

Dr. Eliu Huerta

Lead for Translational Artificial Intelligence