The algorithm for running our linear program on a fine-resolution grid, as I mentioned, takes an incredibly long time. Runs often weren’t completing on my local computer, so I resorted to remotely accessing the lab computers, which improved run-times. In this case, though, the improvement was from ‘Incomplete’ to ‘2 to 3 hours,’ which still wasn’t ideal. But by running these programs in the background over the weekend, I was able to get enough data by Monday to draw up statistics and tables comparing the heuristic’s performance against the grid’s. Essentially, we found that the grid solutions gave slightly better dwell-times, but took three to four orders of magnitude longer to run. This is good news for the heuristic! It seems reasonable to use it to get good solutions quickly.
Existing solutions to the problem of autonomously disinfecting rooms using UV light rely on manually selecting a single vantage point in the room, usually near the center of the room’s obstacles, and allowing the UV light to emit for a fixed amount of time. But there are a couple of issues with this paradigm. By the inverse-square law of light, the radiation dose received by a surface decreases with the square of its distance from the light source, so single-point methods struggle to disinfect far-off surfaces in a timely manner. Another issue is occlusions: objects that block the passage of light. In any realistic room model, light will simply be unable to reach all desired surfaces from a single point, leaving parts of the room still infected.
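To make the inverse-square intuition concrete, here’s a tiny sketch. It assumes an idealized point source radiating uniformly in all directions, which is a simplification of any real light model:

```python
import math

def irradiance(power_w, dist_m):
    """Irradiance (W/m^2) at dist_m from an idealized point source
    radiating power_w uniformly over a sphere (inverse-square law)."""
    return power_w / (4 * math.pi * dist_m ** 2)

# Doubling the distance quarters the received irradiance,
# so far-off surfaces need roughly 4x the dwell time per doubling.
near = irradiance(100.0, 1.0)
far = irradiance(100.0, 2.0)
print(near / far)  # → 4.0
```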
On Monday, I set out to build some test rooms and run the entire pipeline to get disinfection paths through them. I ran into two main issues. First, after finding a TSP tour, I was running an optimal path-planner to find the paths between consecutive pairs of points in that tour. But these optimizing planners were taking far too long to run for very little payoff (the path lengths weren’t much better than the non-optimal paths found by other, faster planners). The more important issue was that the overall paths didn’t look as good as one would expect from our pipeline. This is another way visualizations are invaluable! Without them, I wouldn’t have had any way of catching this. As it was, it was pretty much intuition and eyeballing the paths to see that they didn’t seem right:
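The chaining step of the pipeline, a tour first and then a point-to-point planner between consecutive stops, can be sketched like this. The straight-line “planner” here is purely a stand-in; a real planner would route around obstacles:

```python
def straight_line_planner(p, q, steps=5):
    """Stand-in for a motion planner: interpolates a straight 2D segment.
    A real planner would return an obstacle-avoiding path instead."""
    return [(p[0] + (q[0] - p[0]) * t / steps,
             p[1] + (q[1] - p[1]) * t / steps) for t in range(steps + 1)]

def plan_full_path(points, tour, planner):
    """Chain a point-to-point planner along consecutive tour stops.
    One slow planner call per tour edge is exactly why planner speed
    dominates the pipeline's total runtime."""
    path = []
    for a, b in zip(tour, tour[1:]):
        path.extend(planner(points[a], points[b]))
    return path
```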
For the Tuesday meeting, I spent some time making heat-map visualizations, as Prof. Hauser had suggested last week. The idea is that in a visualization of dwell points, it would be useful to see not only which points are dwell-points, but how much time we need to spend at each of them (since some will probably require much more time than others). He also suggested coloring obstacle edges by how much flux each of them receives. We know each must receive some minimum flux (as constrained by the linear program), but some edges will definitely receive more than others, and it’s worth seeing how “efficient” our solution is at giving all edges only as much flux as required. In the heat map, the coloring varies continuously over a selected color map with dwell time (for the points) and received flux (for the obstacle edges).
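The continuous-coloring idea amounts to normalizing the values and interpolating along a color ramp. A minimal sketch, using a hand-rolled two-color ramp rather than an actual plotting library’s color map:

```python
def heat_colors(values, cold=(0, 0, 255), hot=(255, 0, 0)):
    """Map each value (dwell time or flux) to an RGB color interpolated
    between 'cold' and 'hot': the minimum value maps to cold, the
    maximum to hot, everything else linearly in between."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid dividing by zero when all values match
    return [tuple(round(c + (h - c) * (v - lo) / span)
                  for c, h in zip(cold, hot)) for v in values]
```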
As I’d finished implementing the linear-program optimization last week, I spent much of Monday running it on rooms I’d generated to make sure it worked and to collect various results. For algorithms like this, where a human can’t compute the correct solutions by hand for large examples, I find it useful to start with very simple examples where I can directly deduce what the answers should be. The algorithm working on those small examples then gives me more confidence that it’s working correctly in general. For example, I first used a really small, single-obstacle room to see how the program behaved:
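For a sense of the LP’s shape, here’s a toy version using SciPy’s `linprog` with a made-up 2×2 flux matrix (the real program derives these coefficients from room geometry): minimize total dwell time subject to every surface receiving at least a minimum dose.

```python
# F[i][j]: flux that surface i receives per second of dwelling at
# candidate point j (hypothetical numbers for illustration).
from scipy.optimize import linprog

F = [[2.0, 0.5],   # surface 0 sees point 0 well, point 1 poorly
     [0.5, 2.0]]   # surface 1: the reverse
min_dose = 10.0

# linprog minimizes c @ t subject to A_ub @ t <= b_ub, so the
# ">= min_dose" constraints are negated to fit that form.
res = linprog(c=[1.0, 1.0],
              A_ub=[[-f for f in row] for row in F],
              b_ub=[-min_dose, -min_dose],
              bounds=[(0, None), (0, None)])
print(res.x)  # optimal dwell times at the two candidate points
```

By symmetry the optimum here dwells 4 seconds at each point (2·4 + 0.5·4 = 10, so both dose constraints are exactly met).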
Over the weekend, I’d realized that I didn’t completely understand the rotational sweep algorithm. As it turns out, it isn’t a very well-documented algorithm, and all existing descriptions are hard to follow at the same step: an edge-ordering requirement, where it’s unclear exactly what we’re sorting by. My mentor was similarly confused. I spent some time with him trying to parse it and get my implementation to work (by testing different edge-orderings). By about 10 PM on Monday night, we hadn’t gotten very far. At that point, I decided it might be a good idea to write a different algorithm for the visibility determination. Although its asymptotic complexity is worse than the rotational sweep’s, it was a lot easier to implement. Sometimes, as my mentor paraphrased Donald Knuth, “early optimization is the death of productivity.” It can be better to have a decent solution and work up from there, rather than taking thrice the time to get the best solution immediately.
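The simpler approach is essentially the brute-force check below (my actual implementation differs in details, and this proper-intersection test ignores collinear edge cases for brevity): a target is visible from a point if the segment between them crosses no obstacle edge.

```python
def _ccw(a, b, c):
    # Signed area of triangle abc: >0 counter-clockwise, <0 clockwise.
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def segments_cross(p1, p2, q1, q2):
    """Proper intersection test for segments p1p2 and q1q2
    (degenerate collinear overlaps are not handled here)."""
    d1, d2 = _ccw(q1, q2, p1), _ccw(q1, q2, p2)
    d3, d4 = _ccw(p1, p2, q1), _ccw(p1, p2, q2)
    return (d1 * d2 < 0) and (d3 * d4 < 0)

def visible(p, q, obstacle_edges):
    """Naive O(m) check per pair: q is visible from p iff segment pq
    crosses no obstacle edge. Over n targets this is O(n*m), worse than
    a rotational sweep's O(n log n), but far simpler to get right."""
    return not any(segments_cross(p, q, a, b) for a, b in obstacle_edges)
```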
Every Tuesday morning, I meet with a sub-group of Prof. Hauser’s lab to discuss weekly progress. This sub-group is called the ‘Planning and Control’ group, made up of the people working on the path-planning and perception modules of the disinfection robot. The undergrad interns present what we’ve done over the week during this meeting. So, Mondays are usually a checkpoint for me to really think about what I did over the last week and how to present that work concisely and clearly. This week, I’ll need to talk about testing the different TSP solvers, so I need to finish collecting the detailed experimental results and do formal comparisons.
Over the weekend, I spent some time looking up different TSP solvers. It was actually quite challenging to find good ones; it takes a good Googler to sniff out the software worth pursuing. Over the last few years, I’ve realized that searching for things online is a skill that’s become increasingly handy to hone. It’s pretty amazing how much information one can get online, if one only knows how to search for it (and remembers to search for it). Anyway, this is something I’ve gotten better at over time, but in this case I was initially only able to find two or three basic implementations, which did quite a poor job on some example TSP graphs that I tested.
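For a sense of what a “basic implementation” looks like: a greedy nearest-neighbor construction, sketched below, is easy to write but often produces noticeably longer tours than the stronger solvers one hopes to find.

```python
import math

def nearest_neighbor_tour(points, start=0):
    """Greedy nearest-neighbor tour construction: repeatedly hop to the
    closest unvisited point. Fast and simple, but its tours can be well
    over the optimal length, which is why solver quality matters."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[tour[-1]], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```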
My first week! My internship started with a group meeting involving everyone in the research project. Since this internship is going to be entirely online, the meeting was over Zoom (as will be all of them). Professor Hauser directs the Intelligent Motion Lab at UIUC, and many of the students in this lab are involved with this research project. I was introduced to the other interns and many of Prof. Hauser’s graduate students. I also heard, for the first time, about the scope of the entire project I was working on.