Sep. 10, 2020
This 30-minute panel from NVIDIA, Convergint Technologies & Chooch AI discusses advances in edge AI performance and deployment, followed by Q&A. The implications of cost savings and quality improvements from edge AI are massive for every industry. Chooch AI performs rapid AI training in the cloud, then exports AI models to industry-leading NVIDIA Jetson devices. The edge AI deployments achieve response times of 0.02 seconds on multiple video feeds with extremely high accuracy for a wide variety of use cases. As a systems integrator, Convergint is able to help clients design, deploy, and manage advanced AI video solutions using Chooch and NVIDIA technology on a global scale.
The Edge AI Breakthroughs Panel was recorded at 9 AM Pacific on September 10, 2020.
Amit Goel, Director of Product Management, Embedded AI Platforms, NVIDIA
Amit Goel is Director of Product Management for Embedded AI Platforms at NVIDIA – a platform computing company operating at the intersection of AI, graphics and HPC. NVIDIA supports AI startups like Chooch with training, technology, and go-to-market support through its acceleration program, NVIDIA Inception.
Steve Washburn, Business Development Manager, AI Video, Convergint Technologies
Convergint Technologies is a system integrator that helps clients deploy, manage, and support mission-critical IoT cameras on a global scale, with over 5,000 colleagues, operations in over 70 different countries, and annual revenue in excess of $1.5B.
Michael Liou, VP Strategy and Growth, Chooch AI
The Chooch AI platform delivers complete visual AI solutions, reducing costs and increasing accuracy for a wide range of processes and industries. Chooch AI is deployed on the edge and in the cloud, delivering fast, accurate computer vision for any spectrum of visual data.
Here is the full transcript.
Jeffrey Goldsmith, VP of Marketing, Chooch AI:
Okay, so welcome to the webinar. We’re going to talk about Edge AI breakthroughs this morning with Steve Washburn from Convergint Technologies, Michael Liou from Chooch, and Amit Goel from NVIDIA. I’m Jeff Goldsmith and I’d like to welcome you. It will go about 30 minutes, and then we’ll do Q&A. If you have questions, please ask them in the chat. That’s about it. So, I’m going to pass it over to Amit. He’s the Director of Product Management at NVIDIA, and we’re happy to have him. Thanks.
Amit Goel, Director of Product Management, Embedded AI Platforms, NVIDIA:
Thanks, Jeffrey, for the introduction, and welcome, everyone, to this webinar today. Let me quickly share my presentation. All right, so we are all hearing a lot about AI, and it’s almost a foregone conclusion at this point that AI is the most powerful technology force of our time. And it’s just amazing to see the journey of AI: how it started in the big data centers, where it ran on heavy compute systems operated by some of the hyperscalers trying to identify recommendations for you, tagging your photos. And from there it percolated one level deeper into the cloud. Now you can get access to AI using any of the cloud services; whether you want to do natural language processing or you want to have your image detected, there are APIs available from the cloud.
And what has led to further mainstream adoption of AI is deployment of AI on the Edge, and this spans near-Edge systems and AI on the device, which would include robotics. And this is where the explosion of AI is happening. The reason, as you can imagine, is that there are going to be trillions of Edge devices that’ll need the capabilities of AI. So for those of you who know NVIDIA as maybe a gaming company, we are at the heart of this whole revolution, and NVIDIA offers the leading AI platform that spans the entire journey of AI, starting from products for data centers and the cloud services, to Jetson and Drive products on the Edge. And what’s unique about NVIDIA products and the platform is that it’s all based on one architecture, which allows really easy development, deployment, and integration. We’ll hear a little bit more from our partners today along those lines.
So, let’s take a step back and think about what the challenges with AI at the Edge are. There are several challenges with AI, but today I want to focus a little bit on what is unique about putting AI on the Edge. The first big challenge is just the pace at which AI is evolving. The models that you developed a year ago quickly become stale, because a newer model comes out which has much better performance, much better accuracy, and many more capabilities. So you need a platform; you have to re-think how you’re building your products and deploying them, because you will have to keep changing them as you get new corner cases, new models, and new data. The diversity and complexity of the AI methods that you’re using will keep evolving. So that requires a fundamental shift in the way you program and deploy things.
The second thing is that when people think about AI, they start looking at performance numbers like teraFLOPS and raw compute capability, and that was okay for training, because when you’re doing AI model training, your final product is the model, right? But when you are deploying AI at the Edge, you are essentially doing inference, and the AI is only a small piece of it. You need to take the data coming from somewhere, you need to process it, and then you need to use the AI to understand what’s in that data and act on it, right? Getting the data in, understanding it, and acting on it is a complete pipeline. So just your AI is not enough. You need to think about the full stack, end to end: how am I going to solve my problem? Not “how am I going to build a model,” but “how am I going to make sure that people are socially distancing?” So, you have to think about the problem as a whole, and not just the AI piece of it. That creates some challenges.
And the last thing is deployment and management at scale. Again, it’s unlike running something in the data center or the cloud, where you have an IT team or a dev ops team that is going to manage everything for you. When you are deploying on the Edge, you are talking about millions of devices. There is no one IT person that can manage your millions of devices. So, you need to think about how we can leverage the same technologies that were developed for the cloud and bring them to the Edge, so that a small dev ops team or a small AI ops team can manage these deployments.
So, these are some of the unique challenges that people face when they’re deploying AI at the Edge, and to address these challenges, we created the Jetson platform at NVIDIA. Jetson is essentially a software-defined AI computer for the Edge, and it is available in different form factors, because, as you can imagine, there are different levels of requirements for AI depending on what you want it to do. If you have a small camera at an intersection and you just want to detect the traffic flow, you need a smaller module, maybe a Jetson Nano or a TX2. But if you have a robot that’s going to be driving around looking in all directions, with lidar and cameras, understanding the people and the behavior around it, then you need a lot more compute, and for that we have the Jetson AGX Xavier.
These products start from the lowest level: we have the Jetson Nano, which is the entry level and handles a few sensors; then the TX2 series of products, the Xavier NX, and the Jetson AGX Xavier. As you can see, they range in price, power, and performance, and the unique thing that is important here is that they all run the same software architecture. So, if you’re thinking about a portfolio of products that you want to scale out from, say a video analytics application where you want both a one-store solution and a big warehouse-level solution, you don’t need to re-invent anything, you don’t need to re-write the software, and you don’t need to choose between different platforms. You can leverage the investment that you’ve made in the platform and scale it up or down, depending on your application.
Now, what makes it all work is the software. That’s where the real magic is, right? It all starts with a really good compute platform, which we offer with the Jetson products, but what really enables people to actually deploy and use it is our software stack. So, what we provide with Jetson is not just the hardware, but a complete software stack, and this stack includes everything that you need to build a product: things like the operating system, production-level fusing of your product, and libraries for accelerating video processing, accelerating your sensor input, and accelerating how you do your AI.
And we don’t stop there. As I mentioned earlier, the problem of AI at the Edge requires a full end-to-end stack, so we go one level higher and also create SDKs which are focused on specific workloads. On Jetson today we have separate SDKs: the DeepStream SDK, which is designed for video analytics applications and allows you to do really high-performance video processing, and the Isaac SDK, which supports robotics application development. And then the last piece, which brings it all together, is our ecosystem partners. With all of this together, we see AI being deployed in cities, factories, logistics applications, healthcare, and agriculture.
So, the third piece of our platform is our ecosystem. It is not possible to deploy at the Edge without our ecosystem, and today we have partners like Convergint and Chooch here. We have a lot of partners that provide ready, out-of-the-box solutions, ODMs that will provide you boxes you can deploy anywhere, and then we have partners like Chooch, who provide the software that needs to run on them to actually do the task that you want to do. And we have a global distribution channel. So, we have close to 90-plus ecosystem partners, and this is what really enables fast go-to-market: from the time you have an idea to actually deploying a product, you can bring the solutions from these partners together. They work seamlessly because we have one AI platform, and you can go to market.
And so, based on these three pillars of software, hardware, and ecosystem, we are seeing huge adoption of AI at the Edge, and it is not just one industry. It goes across several big industries: whether you’re looking at industrial applications, smart cities, healthcare, agriculture, logistics, or retail, all these places are now deploying AI at the Edge. And to talk a little bit more about how our products are being used, I would like to hand over to Michael to share his experience with using and deploying AI at the Edge. Michael, over to you.
Michael Liou, VP Strategy and Growth, Chooch AI:
Thanks so much. By way of background, I joined Chooch about five months ago as VP of Strategy and Growth. We’re a Silicon Valley-based, venture-backed firm, and we are a visual AI company, so we focus on the visual aspect of AI. Our technology detects objects; it detects actions, like falling, slipping, or fighting, as well as states. We have the unique ability to analyze still images, video, infrared, X-ray, MRI, and all types of geospatial data, both in the cloud and on the Edge. We’re a member of the NVIDIA Inception program, an AI-focused program with over 6,000 members that helps with go-to-market support and expertise, and we’re also a member of the NVIDIA Metropolis program and the Clara Guardian healthcare ecosystem.
The field of visual AI is very, very broad. There are thousands and thousands of companies out there focused on certain use cases, but we feel there are three or four distinct differentiators that set us apart from other AI companies. As you all know, in order to generate really good models, you need good data, and the whole process of data set collection and generation can be quite tedious. We collect images, we collect video, and imagine the process of actually annotating all the objects in video one frame at a time: that is a very painstaking and very, very long process. We have actually automated that process here at Chooch AI and are able to generate up to 1,200 images per minute as part of the data generation process.
Our second advantage is that we develop models very, very quickly. A lot of AI companies out there often take weeks or months to perfect their models and get accuracy to an acceptable level. We often deliver our base models within hours. Earlier this year, NVIDIA came to us on behalf of an industrial Midwestern meat processing company, asking, “Can you build an AI model that would be able to identify efficient industrial hand washing?” We ended up taking stock video, and we returned that base model in about three hours or so.
Lastly, we are a horizontal AI platform, which means that we are able to serve needs across multiple verticals. So, as a client’s needs change, we can develop new models and deploy them out to the Edge very rapidly on their behalf. To date, we have developed two types of models. We’ve developed customized models for our clients, about 2,400 to date across all the different types of clients that we have, including clients in the healthcare field, oil and gas, retail, media, geospatial, and safety and security. We have over a quarter million pre-trained classes right now and 120 base models, and we have also created what we call pre-trained models. These pre-trained, or public, models are ones that do not require proprietary data from clients; that data can easily be scraped from the web, for things like fire detection, mask detection, and whether someone slips or falls, along with all the different pre-trained classes that I mentioned earlier.
What you see on the screen here is actually the Jetson Nano. Ironically, it’s shown about three times bigger than it is in real life; it’s actually about the size of a deck of cards, or slightly smaller, and on this small device we’re able to fit up to eight models and 8,000 classes, which is quite an achievement. I’ll go into a little bit more detail on that later on.
When you start talking about the problems with the Edge, what we keep asking is: well, what does this really mean, and what can it actually do for us? We know that in the cloud it’s quite easy, right? All we need to do is build the models; we can provision additional GPUs and processing capability, and we’re able to scale almost infinitely. But when you actually deploy to the Edge, you have some constraints, right? You’re talking about a smaller device with a fixed amount of processing power and memory, and how can we do this without running into those constraints? The constraints often show up as degraded accuracy, degraded speed, or a limit on the number of models on these devices. What we’ve done at Chooch AI is architect the models in the cloud and export them down to the Edge devices without suffering any accuracy degradation. We’re able to inference very, very quickly, at around the 20-millisecond level or so for object detection, and we’re able to fit, as I mentioned earlier, up to eight models or 8,000 classes.
And this has huge ramifications, right? If you think about being able to detect hard hats or vests in a warehouse or seeing whether someone slips or falls or whether a fire has broken out, you want to be able to make sure that you have enough models and compute power to effectively inference and predict them in a very, very short period of time, in this case 20 milliseconds.
The other advantages of going to the Edge: the first, which is at the top of a lot of people’s minds, is data privacy and protection. If you’re able to export models to the Edge and all the inferencing is done on the client premises, you don’t have to worry about sending sensitive data, say facial data or other personally identifiable data, back to a third-party cloud. Second, you’re able to inference on the Edge right on the client premises without any network latency from going up and down through the cloud stack, so you get immediate results, which I think is also very, very powerful. And third, and probably most importantly from a cost perspective: it might be okay to inference a live video in the cloud, especially in small clips, but imagine thousands of video cameras streaming simultaneously for security purposes at corporations, or monitoring hard hats and safety conditions across millions of square feet of warehouses. This becomes computationally very intensive and extremely expensive. This is the reason why you can buy a Nano for $99 or a Xavier AGX for $899: that’s a fixed cost, and all of our compute can be done on the Edge without the additional costs of a cloud solution.
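To make the cost argument concrete, here is a rough back-of-envelope sketch. The device prices are the ones quoted above; the cloud GPU hourly rate and the always-on usage pattern are assumptions for illustration only, not figures from the panel.

```python
# Illustrative edge-vs-cloud cost sketch. Device price comes from the talk;
# the cloud rate and 24/7 usage are assumed numbers for the comparison.

DEVICE_COST = 899.0            # Xavier AGX, one-time cost, as quoted above
CLOUD_GPU_PER_HOUR = 0.50      # assumed always-on cloud GPU instance rate
HOURS_PER_MONTH = 24 * 30

cloud_monthly = CLOUD_GPU_PER_HOUR * HOURS_PER_MONTH   # recurring cloud cost
breakeven_months = DEVICE_COST / cloud_monthly         # edge cost amortized

print(f"cloud: ${cloud_monthly:.0f}/month, ongoing")
print(f"edge device pays for itself in about {breakeven_months:.1f} months")
```

Under these assumed numbers the one-time edge device overtakes an always-on cloud GPU within a few months, which is the intuition behind the "fixed cost" point above; the real crossover depends entirely on the cloud pricing and duty cycle you plug in.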
I want to go into a couple of use cases here, starting with surgical AI. This is actually a live deployment that we have with a Fortune 500 medical device company, where we’re deploying three cameras and two NVIDIA Xavier GPUs into the operating room. The medical device company plans to roll Chooch AI out to 2,000 ORs across the U.S., and the technology is indexing and logging every single action that happens within the OR: when the doors open and close, when the surgical team walks in, when the patient is wheeled in, when the drape goes up, when anesthesia starts and stops, when the first body cavity incision is made, and then tracking all the scalpels, forceps, sponges, and gauzes that go into and out of the body cavity, doing needle counts, accounting for all of that, and making sure that nothing gets left behind after the procedures are completed.
This is really important, obviously, because it helps reduce risk, readmissions, and repeat surgeries, and a lot of these procedures actually required an additional doctor on-site to monitor the procedure. As for the ROI of this particular technology, the medical device company feels that they’re going to save about a billion dollars in costs over ten years across these 2,000 ORs. So, this is a perfect example where our technology can identify objects, actions, as well as states.
I want to review a use case that we’ve been talking to some clients about, in the retail space. There’s a lot of technology out there for business intelligence, calculating things like dwell time, how interested people are in a particular product, counting the number of people in a store, and where they’re aggregating. But imagine on Monday, when Apple announces and releases the iPhone 12, having a camera trained on the demo table in a store. With our technology, we could actually track every single demo iPhone on that table: how many times it’s been picked up, how long it’s been held on average, whether it’s the gold one or the silver one. We can even determine demographics.
So imagine compiling that data over a five-, ten-, or thirty-day period, and it turns out that the gold phone is picked up maybe 20% more of the time, held for maybe two minutes versus one minute for the silver, and 60% of the people who picked it up are women. What does that do for Apple’s marketing campaigns? What does that do for inventory control? For production runs, right? There’s a lot of uncaptured data here that would never show up at the point of sale; those of you who have actually ordered a phone usually don’t do it in the Apple store, because they don’t have your color, your size, or your carrier and configuration. So, this is another kind of customized use of our AI within the retail space.
I’d like to close by looping back to pre-trained models. As I mentioned earlier, pre-trained models are ones that we can develop using commonly available data; we don’t need customized data like we do in the surgical AI example, and we’re starting to roll these out to customers right now. They’re deployable today. One use case is industrial safety: imagine ensuring that every worker who checks into a warehouse is wearing a hard hat, a vest, and safety gloves. This hopefully will reduce workplace accidents and workers’ comp claims, which would potentially then reduce workers’ comp insurance premiums. We are also doing PPE compliance, determining whether people are wearing masks and whether they’re social distancing properly. And we’re doing facial biometric access control, where people use their face to check in, or even board flights, on an opt-in basis.
I want to close by saying that we’ve gotten a lot of traction with these pre-trained models, particularly within the PPE space, and this is a perfect opportunity to hand it off to Steve Washburn, my colleague at Convergint, to go into a little more detail about how we actually set up a live pilot very quickly. Thank you.
Steve Washburn, Business Development Manager, AI Video, Convergint Technologies:
Thank you, Michael. I appreciate the opportunity to share some of our experiences and talk about what we’ve been working on as part of this amazing ecosystem that NVIDIA’s been building. Before I talk too much about that use case and some of the real-life examples we’ve been working on, I wanted to share just a little bit about Convergint Technologies. Convergint is a global system integrator with over 120 locations worldwide. We have about 5,000 colleagues and annual revenue of 1.5 billion dollars. This gives us a unique place in this industry and in this ecosystem to help our partners and our clients achieve the true vision and value of AI video solutions, because we just deploy and support technology; we don’t manufacture any of it. So, we use the technology from NVIDIA and Chooch to help our clients solve very complex, enterprise-class IoT problems. That’s what we deliver to them, and we really do pick the best product and solution for each application, bringing our global reach down to the local needs of the specific application they’re looking to solve. And being one of the world’s largest resellers of IP video cameras, one of the leading sensors used for this visual processing of data, puts us in a great position to help our clients really achieve the benefit and value of AI.
We’re a U.S.-based organization headquartered in Schaumburg, Illinois, just outside of Chicago, and we’re part of the Ares Management portfolio, a $66 billion organization. So our size, scale, and depth of knowledge are critical in this ecosystem for moving this technology, as both Amit and Michael have talked about, from a lab environment into production. And that’s what I’d really like to talk about right now. I know in the chat window there have been some questions about use cases and potential uses, and I’d like to focus on a return-to-work use case that we just worked on with the NVIDIA and Chooch team.
This was for one of our clients, a global commercial airline that has hundreds of thousands of employees and operates all over the globe, and, much like many organizations, they recently had to reduce the number of employees coming to their facilities due to COVID. About 45 days ago they reached out and said, “Our executive team is looking for help, some ideas and solutions as to how we can re-populate our facilities and start to regain normal operations, and we need to do this very quickly, because they’re trying to make some of these decisions fast.” So, they asked us to try to deploy some of this technology in less than a week, and there was no infrastructure available: no cloud compute, no network, no cameras. All of this had to be done in a very self-contained manner, simply because of the privacy and cyber security concerns that I’m sure all of you can imagine. And as you know, the global transportation industry has been hit hard from a financial perspective, so there was a very limited budget to accomplish this.
These are the types of scenarios that most people would walk, or run, away from because they’re so complicated, but we just love them, and it was a very fun experience for us to pull together the team of Chooch and NVIDIA to help our client solve these real-life business problems: identifying people that aren’t wearing a mask properly or aren’t social distancing, and helping the leadership of this organization make sure that their employees are following policy and behaving safely, all in a manner that was very portable and very local, and that included a lot of flexibility and payment options so that they could fall within their budget.
I’m excited to say that after a lot of testing, a lot of hard work, and a very, very short period of time, we’ve been able to help that client understand the risks associated with re-opening, empower their employees to engage in safer behavior, and capture all the information and data being generated by the Edge compute of the NVIDIA Jetson product along with the Chooch AI inferencing engine. And I’d like to share with you some of the information that we’ve been able to gather and produce, because as part of our solution set, we also have a data visualization platform that consumes all of this sensor data from all of these IoT devices and represents it in a manner that provides actionable information and intelligence for our clients.
The little graph that you see on the left of this slide is a snapshot of the live dashboard that we’re able to produce, and it is showing non-compliance over time. In the beginning you’ll notice that there’s a lot of non-compliance occurring, and that’s for a few reasons: one, the employees working at the location aren’t following company protocol properly, and two, the model is learning. It’s starting to get better, starting to understand the environment it’s operating in, and we’re making improvements to both the behavior of the employees and the performance of the model. And as you look to the right of the graph, you’ll see an improvement, meaning there’s less non-compliance over time.
And this is where things get really, really interesting and exciting: we’re able to take this information that’s occurring “in the wild,” as we like to call it, understand what that information is, and improve on it. That’s the little image I have on the right of the slide, that little journal entry list. One of the things that’s also very powerful about the Chooch AI platform, the NVIDIA solution, and the team that we put together is that a human is able to indicate whether what the machine deployed on the Edge detected is actually real. Is it accurate? So, we’ve got a dashboard, a live interface, where a human can indicate whether the machine performed properly, and if it did not, the human can note that. That drives confidence in the overall performance of the system, but it also gives us information to re-train and re-educate the model and get better and better performance, which is why the graph on the left is so powerful. Because at the end of the day, we’re deploying these systems and technologies to solve real-life problems, right? It’s not just because technology is cool, or because it’s breakthrough stuff that nobody has ever done; it’s to produce real-life results, and that’s what we find very exciting.
And I think that’s really, in closing, why people want to work with us and why people ask us to help with these very complex and challenging environments. We’ve done this time and time again with a number of partners; this is just one example, and Amit really covered it well. We really need what I call turnkey, or whole, solutions. We need partners that can provide the compute and the AI inferencing engine, but also someone who can deploy this technology. We’ve got 5,000 colleagues all over the world; they climb ladders every day, and they know how to aim, focus, and adjust a camera. We work with very complex, enterprise-class solutions in some of the most challenging environments in the world. And at the end of the day, we deliver results, and I think that’s the key part. As you’re thinking about all of these various ecosystems: how can we all work together? How can we all bring these advanced video solutions forward? That’s what I find very interesting and very fun.
I could talk about this stuff forever, but I think at this point we should turn it back over to Jeffrey and open up the Q&A session, because I’ve seen a lot of questions come through. Jeffrey, I’ll stop sharing and turn it back over to you.
Edge AI Breakthroughs Q&A
Jeffrey Goldsmith, VP of Marketing, Chooch AI:
Okay, so we’ve had a number of questions, and I’m going to pose those to the panel, and then you guys can answer them ad hoc as you will. Let’s start with the last question: can you speak to the scalability of the solution? I’m going to mute myself so Michael can answer this as well. So, can we speak to the scalability of the solution, gentlemen?
Michael Liou, Chooch AI:
As far as scalability goes when it comes to Edge devices, we have a dashboard that manages all of our Edge devices, so when we export models from the cloud down to the Edge devices, we can do thousands, literally, through our dashboard, and deploy those models and model updates remotely. As a matter of fact, for the example that Steve mentioned, we’re based out here on the west coast, Steve’s based in the Midwest, and the client was actually on the east coast, and we did this all remotely. It was actually fairly straightforward.
Amit Goel, NVIDIA:
From the product side, the compute side, as I showed you earlier, we have a portfolio of compute products that can scale the solution from tens of cameras to thousands of cameras, depending on what you need. And the software stack, as Michael mentioned, is designed for scale, right? Using container-based deployment strategies so you can update these things, and being able to monitor these things, is extremely important. We see a lot of our customers deploying these at the store level and at the mall level, where you’re doing hundreds of cameras. So, this solution is definitely being deployed at all levels of scale.
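The container-based, dashboard-driven updates described here can be sketched as a staged rollout. This is a toy illustration only: the device names, version tags, and canary policy are all invented, not the actual Chooch or NVIDIA tooling.

```python
# Toy sketch of a staged fleet update, in the spirit of the container-based
# deployment described above. All names and the canary policy are invented.

def staged_rollout(devices, new_version, canary_fraction=0.05,
                   healthy=lambda device: True):
    """Update a small canary slice first; update the rest only if it is healthy."""
    n_canary = max(1, int(len(devices) * canary_fraction))
    canary, rest = devices[:n_canary], devices[n_canary:]

    versions = {d: new_version for d in canary}          # canary slice updated
    if all(healthy(d) for d in canary):                  # canary reports healthy
        versions.update({d: new_version for d in rest})  # roll out fleet-wide
    else:
        versions.update({d: "previous" for d in rest})   # hold the rest back
    return versions

fleet = [f"jetson-{i:04d}" for i in range(1000)]
rolled = staged_rollout(fleet, new_version="v2")
```

The point of the canary step is the one made in the talk: with no IT person at each site, a small ops team needs automated guard rails so that a bad model build reaches a handful of devices rather than the whole fleet.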
Steve Washburn, Convergint Technologies:
We have a similar experience. Usually what happens is that clients like to test and evaluate the technology, so we typically start the projects out quite small. They usually start in a couple of locations, where the client is evaluating and making sure that the business benefit is being achieved. And then once the business benefit is proven, it scales very quickly throughout the entire enterprise, and I think Michael’s example in the healthcare application, where it’s potentially saving them a billion dollars, is very similar to the experiences that we’ve had as well. As organizations start to understand it, see it firsthand, and really believe in it, it starts to get adopted and deployed on a very large scale quickly. Jeffrey, back to you.
So, a couple more questions. I’ll pose two at the same time and you can answer both. The first question is: can you speak to any smart city or additional industrial applications? And someone’s also asking a very specific technical question: how many camera feeds can a Jetson Nano manage if it has two models running? So, let’s address both of those questions as you will.
I’ll take the Nano question. The Nano was actually the first Jetson device that we implemented on, earlier in 2020. Our preferred device right now is the higher-end, $900 Jetson AGX Xavier. It’s much, much more powerful, as you saw on Amit’s earlier screen. With the Nano we can handle one stream right now, but with the AGX we can handle at least five streams simultaneously. And we’re talking about streams that are RTSP compliant in terms of their protocol, at 30 frames per second, 1080p or higher.
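As a back-of-the-envelope illustration of the stream counts Michael quotes, you can estimate how many full-rate streams fit in one device’s per-frame time budget, assuming frames from every stream pass through the model sequentially on a single accelerator. The latencies below are illustrative assumptions, not published Jetson benchmarks.

```python
# Rough stream-capacity estimate for an edge device. All numbers are
# illustrative, not official Jetson specifications.

def max_streams(inference_latency_s: float, fps: float) -> int:
    """How many full-rate streams fit in the per-frame time budget."""
    frame_budget_s = 1.0 / fps  # time between frames on one stream
    return int(frame_budget_s // inference_latency_s)

# At the 0.02 s response time mentioned earlier and 30 fps input:
print(max_streams(0.02, 30))   # 1 stream keeps up at full frame rate
# A lighter model (hypothetically 0.006 s/frame) leaves room for ~5:
print(max_streams(0.006, 30))  # 5
```

This also makes Amit’s point below concrete: halving model cost (or the input frame rate) roughly doubles how many streams a given device can serve.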
And to add to what Michael said, we do see some customers deploying a few streams. The number of streams depends on a lot of variables. What resolution do you want to ingest? How accurate do you want your models to be? Are you okay with false positives and negatives? And I think Chooch has a variety of models that you can choose from, right? If you’re supporting an extremely important operation like a surgery room, you want to use your best models, right? You want the highest quality model, and you want to process the highest resolution data so you can see the smallest level of detail, and that’s where you need a Xavier-class product. If you’re just trying to see people coming and going, you could use a smaller model, and you could run a greater number of streams on the Jetson Nano. So, the number of streams completely depends on the application you’re trying to address.
The second question was about industrial and smart city applications. There are a lot of smart city applications, from things like smart lighting, where people are putting up these cameras and deciding when they even need to turn on the lights. If there’s nobody on the road, why spend that energy resource running the light? You want to optimize your traffic. You want to understand how to change the signals so that traffic can move more smoothly, or identify areas of possible accidents and how to change traffic flow so that there are fewer accidents. In smart cities there are, again, lots of applications like this. Now, with America reopening, a lot of cities need the ability to monitor whether people are following social distancing, whether they’re following the CDC guidelines or not. So that’s extending into the smart city.
On the industrial side, we see applications in predictive maintenance, understanding what’s happening with a machine. Even before it fails, you want to be able to have a technician go there and repair it. Quality assurance is a big one. It’s an extremely hard, very manual, very tedious process for people to inspect small device parts and make sure there’s no ding on your iPhone, no scratch on it. So, you can use AI-based quality assurance and defect detection for that in industrial applications. And there are more. I could go on all day, but is there anything you want to add, Steve? Michael?
Yeah, I was going to say, really your imagination is the limit, and the breakthrough that’s occurring in the technology today comes down to a couple of things, and we’ve talked about them repeatedly. One is the Edge-based AI computing technology that NVIDIA’s bringing to the table. Before, all of this compute would be very expensive and very large. You’d probably need a data center type of environment, which was not very practical, because video streams, as many of you know, are very large from a bandwidth perspective; they consume a lot of capacity and are very hard to move around a network. The other breakthrough, and why we love working with the NVIDIA and Chooch team, is really Chooch’s ability to have all these publicly available pre-trained models at our disposal, and then to adjust those models on basically an on-demand basis and quickly re-train a model, or train a model to do something new and unique. So whatever you can think about, if your eye can see it, if it’s something that happens repetitively, meaning it happens more than once or twice, and it provides your business value, we are confident as a system integrator that we can take this Edge compute technology with this amazing AI technology from Chooch, deploy it in your environment, and produce results.
And like Amit said, the sky’s the limit on what you think you can do and accomplish. It does take, I’m going to say, a little bit of creativity and a little bit of willingness to experiment. Some things are straightforward, like the mask detection scenario we talked about, things people have done over and over again, but sometimes there are unique things that people look for. Don’t be scared of trying those things, because we’ve done those projects, we’ve had those successes, and it really adds a lot of benefit and value to organizations, from my perspective.
So, a few more questions. There was a question about whether we have ruggedized our installs, and another question, let’s see here, too many questions to manage all at the same time, about how fast the market is growing for this. So: have we ruggedized, and how fast is the market growing? What do you think, guys?
Regarding a ruggedized solution, we do work with the U.S. government, and they do require ruggedized solutions, so we have a couple of partners we work with to help ruggedize for various harsh operating conditions. And I imagine that if you were going to stand up some of these technologies outdoors, where there’s sun and rain, it’s also important to know that they can be weather-proofed. I think we talk to two or three providers. As far as installation goes, that’s probably more in Convergint’s court. We’re a software company at Chooch, and we basically stop at the JSON response and maybe provide some basic analytics, but additional analytics or last-mile installation would fall into Convergint’s court.
Yeah, Michael. Just to jump off on that, we’re seeing this technology deployed pretty much all over the world, from an oil and gas rig, a very harsh, I’ll say inhospitable, environment, to office environments. And one of the technologies that we see as an enabler is 5G, and I alluded to that a little bit in the use case I shared. We’re able to deploy these Edge-based processing technologies, run inference right at the very point of the video, and then transmit the results back in very small data packets and bursts, really from anywhere. And that’s pretty amazing when you think about it, because if you have a camera someplace and you add a 5G or another type of cellular connection to it, you can put that technology basically anywhere in the world, run the computer vision capability against it, provide the data that you need, and then visualize that data from the comfort of your desk. That, to me, is what gets very exciting, and it’s going to increase adoption of the technology as it continues to gain more acceptance, and as knowledge spreads in the industry and the world that this is possible.
A question about whether Edge computing will require any changes in an organization’s operating model, and another question about integrated cameras with NVIDIA inside. We do know about some partners we’re working with that have NVIDIA devices inside cameras. And changes to organizations, do you guys want to address that? Things are going to change, obviously. Amit?
Yeah, so on cameras with Jetson devices inside them, we are seeing a bunch of them now coming out with the Jetson Nano. In the past, the TX2 and the Xavier were too powerful for one camera, so nobody would put them in. It was mostly bringing multiple feeds into a single system and running them there. But with the Nano, we have the right form factor, so there are several companies building cameras with an embedded Jetson product inside. So you can run it all on the one box.
Talking about changes, as I mentioned, this has to do not just with Edge computing, but with the AI itself. Deploying AI requires some changes to the way companies operate. First, as you saw in the graph, the work matures. You have to be prepared that the models will keep learning. You have to be agile. You need to be monitoring your initial data, and you need to keep updating it, right? So, you need to plan for that. This is not a one-time thing that’s done and over. If you’re deploying AI on the Edge and you really want to maximize your investment, you need to be prepared to have a team that can monitor things, update things, and make your solution better over time. All of this runs on the Edge, but you’re ultimately improving the value that the end customer is getting, so you may want to think about how that business model works. Apart from that, deploying computing on the Edge is actually enabling a lot of people to go beyond their traditional solutions, where they were limited to things that were connected to a data center or had cloud connectivity.
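The kind of ongoing monitoring Amit recommends can be sketched as a rolling check on model confidence that flags when it is time to collect new data and retrain. The window size, threshold, and `DriftMonitor` class are arbitrary illustrative choices, not any vendor’s API.

```python
# Minimal sketch of ongoing edge-model monitoring: watch a rolling window
# of detection confidences and flag when quality appears to drift.
# Window and threshold values are illustrative.

from collections import deque

class DriftMonitor:
    def __init__(self, window=100, threshold=0.7):
        self.scores = deque(maxlen=window)
        self.threshold = threshold

    def record(self, confidence: float) -> bool:
        """Record one confidence score; True means retraining is advised."""
        self.scores.append(confidence)
        avg = sum(self.scores) / len(self.scores)
        return avg < self.threshold

monitor = DriftMonitor(window=5, threshold=0.7)
flags = [monitor.record(c) for c in [0.9, 0.85, 0.6, 0.55, 0.5]]
print(flags)  # [False, False, False, False, True]
```

In practice this check would run on-device, with the flag reported back to a central dashboard so a team can decide when to push an updated model.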
Yeah, I would jump off of what Amit just said. I think what we’ve seen is a few things. One, with these systems and technologies, you have to understand you’re going to need to support them, and not only from a hardware perspective, making sure the device is working, because once they start producing benefit and value, they become mission critical to what you’re doing, so you need to be prepared for that. But I think Amit hit it right on the head. You have to be prepared for learning and growing and adapting with it, because this is going to change a lot of the ways you think, a lot of the ways you execute and do things, and that’s going to change over time because we live in an ever-changing world. So, it’s maybe not so much a departmental change per se. It’s more of an awareness change in how you want to engage with technology and how you want to use it, and really a flexibility. You know it’s going to learn. It’s going to adapt, and that’s going to force your organization to act or behave a little bit differently, so change management, I would say, becomes a really important piece as well.
So, we’re almost coming up on the hour. We have about eight minutes left. There’s a question about AI training, let me look at Fred’s question again. Yes, exactly, you can see it in the Q&A. It’s quite a good question: it seems like a video feed data stream has a high signal-to-noise ratio, and the question is about industrial tool preventative maintenance. We’ve actually addressed this with a customer. How would you go about, and we can start with Michael, beginning this process? We have an industrial tool preventative maintenance problem, we have a video stream, and we want to train for that. How would we solve this problem as an ecosystem? Was that clear?
Yeah, that’s fine. So, going back to a point I made earlier about data set collection and generation. As I said, we take in both images and video, and we’ve developed technology to ingest the video and track the particular objects or anomalies of interest. So, in this particular use case, if there was an anomaly that you wanted to be able to show and look at from different angles through video, in all its different states, we would take that video, draw the bounding boxes, identify the anomalies that we’re interested in, run the video through, and generate all the training images from that.
Now, our data is only going to be as good as the quality of the video feed itself. So, if it’s poorly lit or a little blurry, that data is not going to be as useful or helpful. If the resolution is clearer, the lighting conditions are better, and we can get really, really good images, then the quality of our data set is going to be much higher and we’ll be able to train for it much more efficiently.
And we can do that for anything. Imagine you’re trying to identify a car and you want to make sure it’s a particular brand of car. You would need to cover every single angle, every single color, every single lighting condition, and that’s thousands and thousands of images, but once they’re all uploaded into the data set and sent to the model for training, we’ve pretty much taken care of every single use case. People often say that good models depend on huge, vast amounts of good data, and that’s the case here. Video supplies a lot of that data, and we’re able to really shorten the time it takes to extract the data from videos and hence develop the models more quickly.
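The frame-extraction step Michael describes can be illustrated with the sampling arithmetic alone. The actual decoding (e.g. with a library like OpenCV) and the bounding-box tracking are omitted, and all the numbers are illustrative: the idea is simply that a 30 fps clip yields many candidate training images, from which you keep a spaced subset for variety.

```python
# Sketch of sampling training frames from a video at a fixed interval.
# Decoding and annotation are omitted; only the index arithmetic is shown.

def sample_frame_indices(duration_s: float, fps: float, every_n_s: float):
    """Frame indices to keep when sampling one frame every `every_n_s` seconds."""
    total_frames = int(duration_s * fps)
    step = max(1, int(every_n_s * fps))
    return list(range(0, total_frames, step))

# A 10-second clip at 30 fps, keeping one frame every 2 seconds:
print(sample_frame_indices(10, 30, 2))  # [0, 60, 120, 180, 240]
```

A longer clip of the same object under different angles and lighting, sampled this way, is what turns one video into the “thousands and thousands of images” the training set needs.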
Okay, I think this is going to be the last question, because we’re really out of time now. A question about a collective approach to cyber security and malware with regard to software updates. So, we have an ecosystem where Chooch is able to update models on remote devices from our dashboard; how do we prevent cyber security issues or malware in any of these software updates? Guys, do you have any answers to that? I’m just the marketing guy, so I’m not going to go there, but it’s a legit question, cyber security around our deployments.
I’ll take a high-level, marketing type of approach to it. It is a very important piece. Honestly, there are some things we see frequently. One, we see the question about video quality, which Michael did a nice job of responding to. Another question we typically run into is privacy, which we’ve touched on just briefly, and we could have a whole other discussion on privacy and the ethical use of AI. And probably the third biggest question we run into is this cyber security question. From our perspective, that revolves around having the appropriate compute solution. We do full boot-level encryption, full encryption on transport of the data streams, and we also include some pretty robust cyber security management and other tools that all work together.
I’m not going to do the technology answer justice, but it is one of the three key check-the-box questions that we need to address, and it usually becomes a highly collaborative process, because these technologies are being deployed in a customer’s environment, and that customer usually has expectations around what they feel are the appropriate ways to execute and deliver this. So, it becomes a shared security responsibility approach, where we take best practices on the deployment and the supported technology, interface that with the organization’s typical cyber security expectations, and meet those requirements. It’s very doable, very achievable, but it is something we have to think about and make sure we execute and implement properly.
And yeah, I can add a little bit about what we do on the platform side. Given that the products we’re building are designed for critical applications, we take security very seriously. Within our platform, there are lots of things we do in our hardware and software stack, as Steve was saying. We provide all the tools you need for secure boot, to make sure that the root of trust and the trust chain are maintained. Also at the hardware level, if you look at the latest generation of Jetson products, we are always building more and more security features into the hardware, because these devices are out in the open; people can get physical access to them, and we have to think about what people with bad intentions could do with them. For risk detection, there are a lot of monitors on the hardware itself that can detect these activities, and on the software stack we provide all the tools you need, whether it’s fuses or encryption. We have dedicated hardware so that you can encrypt everything without sacrificing performance. So, all these things are available on the platform to make sure you’re deploying this in a secure and efficient way.
Okay, everyone. Well, thank you. There are still a few more open questions, but we’re at the top of the hour. I don’t want to keep everyone beyond 10:00 am Pacific time. If you have additional questions you can contact any of us. Amit Goel is at NVIDIA, of course. Michael and I are at Chooch AI. Please use the contact form on Chooch AI and we’ll get right back to you with answers to any of these additional questions. And of course, Steve Washburn is at Convergint Technologies. It’s been a great turnout, and we appreciate the panelists. Thank you, Michael. Thank you, Amit. Thank you, Steve. And we will speak again soon. Thank you very much, everyone.
For answers to additional questions, please contact us via Edge AI
To start using Chooch, sign up for the free AI platform
And to experience the power of AI in the palm of your hand, install our AI Demo