Murray:
So, Austin, I was googling you this morning. Bachelor of Geophysics and Computer Science. It sounds pretty interesting. Also, a minor in astronomy. Then you took that into a master's in computer science. When I started looking through, it looked hellishly interesting.
Robotics and AR, computer vision, machine learning, deep learning, data visualization. This is ridiculously exciting. Then the experience that you've got with that is with water quality in Hawaii or GIS with NOAA.
Austin:
Yeah, I had my hands in a lot of different things. Started off doing geophysics and moved more into astronomy. I got really into software as well, which transitioned nicely into computer science. I was always doing some independent tinkering with ML and things like that beforehand, so I thought it was something to jump into in grad school too.
Murray:
You got into software from astronomy, you said, not the other way around.
Austin:
Astronomy brought it back. I've been programming for a long time, from a young age. It really started with robotics when I was 13, 14 years old. And then I took a pause on programming.
I really enjoyed it and then a few years down the road in college I really got back into it. It started with data science application to astronomy data. It was less building software and more using scripting to analyze astronomy data and to try to come up with insights there. That showed me that there's a different application to coding and programming which sparked my interest.
Murray:
That brings us to why I snagged you in today. Machine learning and AI, they talk about it everywhere, and I've picked up a little bit, but I'm pretty sure I've only got half the picture. Let's say you're one of our users or an interested listener. You've heard about AI, you've heard about machine learning, deep learning. They all fit together somehow, but why are there different names? What's what, and which one do you actually want?
Austin:
These terms get thrown around a lot and it is up to some interpretation, but generally I see AI as the all-encompassing term: exhibiting intelligent behavior, and the algorithms that can do that.
Machine learning is a subset of AI focused on the actual learning aspect. There's a few other algorithms outside of the ML bubble that would still be within AI, things like search algorithms and algorithms that apply to robotics, among others. ML is generally focused on the actual algorithms for learning and for predictive analytics.
Deep learning is another subset of that, like the evolved form of machine learning. It takes it a step further with neural networks and more complex algorithms.
Murray:
How complex does it need to be to count as AI or machine learning? What makes an AI? At some point you go from an Excel spreadsheet with a formula, which is just the sum of some cells. I get that you're trying to train an AI to do something, but there's an in-between, and there's a whole lot of straight algorithms which are not much different from a formula in an Excel spreadsheet.
When does something become AI rather than the math that I'm using in the spreadsheet?
Austin:
Both are very valuable. Even in cases where ML applies really nicely, using a combination of both is still a really good thing to do.
In general, ML is great in cases where the data is complex. Like, too complex to code up deterministic, rule-based programs. So if you're looking for scale, if you're trying to solve a task that is tedious, that you have to do at scale, that's complex, that essentially can't be deterministic, or where there's just too many factors to analyze, this is where it comes in really nicely, where you can start to dive into the relationships between these factors and between this data.
That side is focused on the relationships between these complex factors, predictive analytics. But there's the other side, more like exploratory analysis, where you're just using simple formulas, but they go a long way: averages, aggregations, filtering, etc. That's just as important for insight, especially in our case too.
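To make the exploratory side Austin describes concrete, here is a minimal sketch with no ML at all, just filtering and averaging over a small portfolio. The field names and numbers are invented for illustration, not from any real data set.

```python
# Hypothetical building records; every field name and value is made up.
buildings = [
    {"name": "A", "floors": 4,  "area_m2": 2100, "energy_kwh": 310_000},
    {"name": "B", "floors": 12, "area_m2": 9800, "energy_kwh": 1_450_000},
    {"name": "C", "floors": 6,  "area_m2": 3300, "energy_kwh": 540_000},
]

# Filtering: keep only buildings above a size threshold.
large = [b for b in buildings if b["area_m2"] > 3000]

# Aggregation: average energy intensity (kWh per square metre) across the set.
intensity = sum(b["energy_kwh"] / b["area_m2"] for b in buildings) / len(buildings)

print(len(large), round(intensity, 1))
```

Simple formulas like these often surface trends, such as an outlier building with unusually high energy intensity, before any predictive model is worth building.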
Murray:
Our case and insights are two things. I was just reading your research work on precipitation and weather and such. That's vastly different to reality capture and the work that we’re doing. And not necessarily just reality capture but the industries we're in, in general. There's reality capture, which is central to us, but there's real estate on one side and there's construction on the other. How is AI breaking into these spheres?
I see a lot of research at university level, if you want to look at point cloud segmentation. It's all university-level stuff, not commercial-level stuff, but it's becoming commercial quickly. Is it just about point cloud segmentation? What else should we be interested in that machine learning can help with?
Austin:
Point clouds are just one aspect of building data. Everything surrounding point clouds, that's the main application, closest to what we're working with, but there's other types of data in addition to the point cloud.
There's the building metadata. As we build our database of buildings, these buildings have certain properties that are not necessarily just the geometry. That metadata could be something we look into for predictive analytics. For example, a user might know certain information about their building but not have the digital twin for it; they just have certain metadata. Something could be said from those parameters alone, based on the analysis of buildings we've already done within their portfolio. They can interpolate, or we can interpolate, missing information based on our previous work.
There's that building metadata, there's the geometry and there's other parameters too.
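A hedged sketch of the interpolation idea above: estimate an unknown property of a building from the most similar buildings already analyzed in the portfolio. The features, the target value, and the nearest-neighbour averaging are all illustrative assumptions, not a description of any production method.

```python
import math

# Hypothetical portfolio: (floors, area_m2) -> known annual energy use in kWh.
portfolio = [
    ((4, 2100), 310_000),
    ((12, 9800), 1_450_000),
    ((6, 3300), 540_000),
]

def estimate_energy(floors, area_m2, k=2):
    """Average the known value over the k most similar buildings."""
    def dist(features):
        f, a = features
        # Scale area down so both features contribute comparably.
        return math.hypot(f - floors, (a - area_m2) / 1000)
    nearest = sorted(portfolio, key=lambda item: dist(item[0]))[:k]
    return sum(value for _, value in nearest) / k

print(estimate_energy(5, 2800))
```

The point is only the shape of the workflow: known metadata in, a plausible estimate out, with no digital twin required for the building being queried.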
Murray:
It's looking into the model and trying to glean whatever data you can get out of it. When we're looking at BIM, it seems to be way easier when there's metadata either within the model or around the model. At least that a wall is a wall, and if you're aggregating areas that are tagged as areas, that's way easier than, say, proper semantic segmentation. But point clouds themselves, as far as I understand, are the poster child for complicated machine learning, because the data is really simple and ridiculously unstructured. It's very hard to deal with a set of points that have only got X, Y, Z, and RGB, and that's it. It's way easier to deal with something that's tagged as a wall or tagged as an area.
Austin:
Because of that, there's a lot of pre-processing and dedicated work that goes into processing point clouds, just getting them ready to even have these algorithms applied.
It's not as straightforward as just using machine learning on tabular data. There is that extra level of complexity.
Murray:
It's all about data science. If that's what AI is, what machine learning is, and what deep learning is, then data science wraps the whole thing. Am I right about that?
That data science is almost one level up from AI. It's not necessarily AI, but it's looking at this data and figuring out what to do with it. Is that it? Is that actually what we're trying to give to our users? We're trying to say, if you apply data science to this, then you can get a whole lot of insight. We've just got to see how you apply the data science to it?
Austin:
So, there's things that we can do and there's things that we can empower our users to do themselves. There's something in giving users the flexibility to dive into the data, giving them certain tools to dive into the data and realize those insights.
But really it's not just machine learning, it's not just predictive analytics. It's also just diving into the data, doing that exploratory analysis, trying to see if there's trends or correlations within the data. And that's all something that can be done with simple formulas. The approach is pretty straightforward.
Murray:
That's something we're trying to do as well. If you want the tools to dive into the data, that's what we're doing with IPx. What tools do you need, or how would you dive into the data?
If you're a user and you've got a portfolio of buildings, what is that going to look like? What tools do you think you have?
Austin:
A big part of it is being able to view, that's one. Having a way to view the actual geometry of the building, being able to see the relationships between the elements in the building. Having the model is a great thing.
But there's also things we can do to simplify that experience, looking at different data aggregations. I think it's a combination of simplifying it and providing users with the flexibility to dive as deep into the model as they want to go. Providing the information in a consumable format.
Murray:
What you were just saying, simplifying the experience and simplifying the data. If you're looking at any BIM model, any model, step one is that you have too much data. You've got to filter it out to find what you're looking for.
Austin:
Yes, simplifying it down, providing a way for users to dive into a single building, or to compare between two, or between multiple. That might be something interesting in a portfolio.
Murray:
And is this something we can use machine learning for? These other machine learning tools, can you throw them at this and get insights? What I'm wondering is, aside from BIM and Integrated Projects, there's all the hype around machine learning, and a lot of it is around reality capture.
Where is that pointing the user? What should the user be interested in, or diving into, to get the most out of what's possible with machine learning?
Austin:
When it comes to machine learning applied to building data, it depends on what our data set looks like. What information do we have? If we only have a certain number of parameters or factors, machine learning might be overkill or not the best approach. But as we start to build our data set, we realize there's more information out there, things like energy analysis. Now we're bringing in some of this outside information and we have new data related to the building. As those parameters build up, we'll start to find that this building data is more complex, too complex to extract actionable insight from in a hard-coded sense. That's when we can start to look at applying machine learning. But really we would need enough data. Once we have a certain number of parameters, we'll need enough data to actually train these algorithms and do something useful.
It depends very much on what information we have and what we are looking to get out of it.
Murray:
So, in a general sense for machine learning, when we've got enough data. Typically you would use huge amounts of data to train an AI to do anything you want, ridiculous quantities of data, which with a set of BIM models you're not likely to get. But once you start adding in climate information or embodied energy information, then the data we've got around a building, around a set of buildings, is growing.
On one hand you want to look for opportunities in AI. On the other hand, you want to be careful about diving into the hype. That's the thing about complexity, isn't it? You've got to choose the right tool for the job. While AI is an exciting tool, make sure you put the right problem in front of it. You want complicated problems.
Austin:
It really shines where the problems are complex and where you have a lot of data. Even within ML, certain tools excel more than others. Some of the earlier ML algorithms are better at smaller data sets. With deep learning, when you're looking to train these huge networks, that's when you really need a lot of data.
There's some that are better with minimal amounts, but even then, taking a step back, we really have to figure out when would be the right time to apply this, and whether we even need it based on what we're trying to show.
Murray:
The problem's got to be worthy of the engineering time you'd have to put into it. When you get to the digital twin, point clouds are one part, but as we go through the BIM process of modeling, you've got a digital twin at the end. Especially if you've got a portfolio of buildings, perhaps with real-time sensors, that's when you have loads of data available, loads of data you can throw at your AI. You just have to figure out what the best use for it is, or find problems where this is the best solution.
Austin:
Digital twins serve as a great backdrop for including more information, as we start to aggregate that information from different sources. There's a point where it gets complex, and the key would be setting up the data pipelines to properly aggregate and clean that data and prepare it for predictive analytics. If that data is messy and we don't have the same set of features for each building, it's going to be a lot harder to really do anything with it.
That's a big factor here, especially since a lot of the data in this industry comes from different siloed places. In general, people don't have access to every piece of information about their building in the same way.
For certain buildings, certain information is missing because the owner either didn't hire the service or didn't need that information. For some buildings it's missing, and for some it's there.
Murray:
It's just going to throw any analysis off like crazy.
Whatever it is you're trying to analyze, like the emissions of your building or building energy, every single building, no doubt, is going to be unique. If you're just looking at energy, there's multiple ways you could be getting your energy. The data scientist has to take this incoming data and clean it, but that's where it gets complicated. For instance, in our industry, if you're going to throw out buildings that don't have the right electricity input to give you clean answers, you're going to throw out too much data.
When you're looking at a portfolio of buildings and the size of the dataset: no matter how big a company you are, an individual company is going to have a handful of buildings, or a few hundred buildings at the outside max. Even if we've got quite a nice data set of buildings, it's still a few hundred to a few thousand buildings.
We're not going over a few thousand by a long way, which makes it a lot more complicated. That's a tiny data set from your point of view.
Austin:
That's tricky. There's probably some tool or algorithm out there that plays nicer with smaller data sets, but really, we need to take a step back and see what data we have. What do we want to do with it? What's the best tool for the job? That requires a whole study in and of itself.
It's going to require some combination of people who are versed in machine learning, but also industry experts who know the data a little better and might have some insight on how to pre-process it.
Murray:
And of course you don't want to start off with a solution and go out searching for the problem you're going to solve. "If we can do machine learning, that's fantastic, let's go figure out what we can fix with it" is the wrong way around. We should get a problem in front of us and figure out the best tool, as opposed to having a tool and going to find a problem. On machine learning, there's BIM at the end of the process: the semantic segmentation, creating BIM out of a point cloud.
Do you think you can do it without much human involvement?
Austin:
It's a tough one, but it can be done. A good first step would be to try to minimize human involvement rather than completely replace it. There's a few points in the point-cloud-to-BIM process where a human-augmented approach would work really well. There are parts of the process where doing the segmentation with machine learning, or whatever algorithm makes that happen, might be easier. But there might be parts where it's simpler for a human to draw a bounding box around the outer walls of the building real quick and then pass it to some automated pipeline. So it's diving into that and really understanding which parts we can look to automate.
Even within the point cloud itself, it's identifying what the current algorithms are really good at. We might find that MEP is much harder to automate than the outer walls, or furniture even.
Murray:
The point cloud is always going to have a certain fuzziness to it in terms of camera focus, which doesn't matter when you're looking at a wall or a floor slab, but matters an awful lot when you're looking at a table leg or a piece of furniture.
While there might be quite simple algorithms to segment out the big planes like walls and floors, you're going to get down to collections of little things: chairs and tables and computers and bookshelves and things like that. That's going to be way harder than just blanketing all the vertical planes, which are clearly walls, and cutting them out.
That's fine for the beginning. Then as you get into the detail, this is actual traditional machine learning, right? You're going to have to give the AI a whole bunch of toilets to look at so it knows what a toilet looks like, then be able to recognize that in the point cloud.
Does this point cloud of a toilet or basin look like what it's expecting? The question is how successful that is and how complicated it is. Are you going to have to give the AI examples of every single object it's going to learn?
Austin:
From what I've seen, these classification algorithms work really well in cases where you're able to define the classes. It's much easier to say, “Hey, classify this object into one of these 10 classes,” versus “identify what this object is, based off anything.”
Based on the limitations of these algorithms, you almost need to have those classes. Dumbing it down to “chair”, “desk”, very simple objects, and giving it the ability to classify within those categories, is a much easier problem than having hundreds of classes. It's possible there are algorithms that can do things like that.
But the more constrained the problem, the more powerful and accurate these algorithms are.
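A toy sketch of the constrained-classification point above: force the model to pick from a small, fixed set of classes. The two hand-picked features (height and footprint) and the class centroids are invented for illustration; real point-cloud classifiers learn from the raw points, not from two numbers.

```python
# Hypothetical class prototypes: (typical height in m, typical footprint in m^2).
CLASS_CENTROIDS = {
    "chair": (0.9, 0.25),
    "desk": (0.75, 1.5),
    "wall": (2.7, 0.1),  # thin vertical plane
}

def classify(height, footprint):
    """Pick the closest class; the answer is always one of the fixed set."""
    def dist(centroid):
        h, f = centroid
        return (h - height) ** 2 + (f - footprint) ** 2
    return min(CLASS_CENTROIDS, key=lambda c: dist(CLASS_CENTROIDS[c]))

print(classify(0.95, 0.3))
```

Because the answer space is closed, the classifier can never return an unknown label, which is exactly what makes the constrained version of the problem so much more tractable than open-ended identification.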
Murray:
So, give the algorithm big enough chunks of categories for it to divide something into.
Austin:
It's looking at what we're producing and trying to see.
Murray:
If we're able to hone in on those classes, is that enough? And do we have a solution for cases where an object might fall outside any of these classes? What would we do then?
I imagine the problem with BIM might be different in other places. The segmentation of it rather than the semantics of it, where walls start and floors finish. Or is the problem with point clouds specifically, compared to 2D video or 2D-plus-depth video?
What I'm told is that point clouds are unbounded. If you're looking at a picture on a screen, you can allocate a pixel anywhere. With the sparse point clouds we're dealing with, there's a lot of blank space in between: whether a piece of blank space is something not scanned, or air that was scanned, where the whole point cloud starts and ends, where the wall you're trying to segment starts and ends. It's more complicated because we're doing the whole thing in 3D. Is that right?
Austin:
I would agree. With data like point clouds, or any sparse data, the way the data is encoded has a huge effect on the performance and capabilities of these predictive algorithms. That's why there's a huge effort around the libraries that process point clouds.
A typical method of dealing with sparse data might not be appropriate for some of the analysis we're looking to do, so how that data is consumed by the model is a really key aspect. Point clouds are tricky in that sense. There's a lot of libraries that have been released recently that abstract away the difficulties of consuming point clouds, as well as implement some of these state-of-the-art neural networks for classification and segmentation.
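One common preprocessing step those libraries provide is voxel downsampling: snapping an unbounded, sparse cloud onto a regular grid so downstream algorithms see a bounded, evenly sampled representation. Libraries such as Open3D ship tuned versions of this; the pure-Python sketch below is only illustrative.

```python
from collections import defaultdict

def voxel_downsample(points, voxel_size):
    """Replace all points in each occupied voxel with their centroid."""
    voxels = defaultdict(list)
    for x, y, z in points:
        # Bucket each point by the voxel it falls into.
        key = (int(x // voxel_size), int(y // voxel_size), int(z // voxel_size))
        voxels[key].append((x, y, z))
    centroids = []
    for pts in voxels.values():
        n = len(pts)
        centroids.append(tuple(sum(c) / n for c in zip(*pts)))
    return centroids

cloud = [(0.1, 0.1, 0.1), (0.2, 0.2, 0.2), (1.5, 1.5, 1.5)]
print(voxel_downsample(cloud, 1.0))
```

Two nearby points collapse into one centroid while the isolated point survives, which is the whole trade: less data and a regular structure, at the cost of fine detail, the same detail Murray notes matters for table legs more than walls.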
Murray:
Is it just that it's way easier for us to think of use cases with point clouds than with the rest of the BIM process, or the rest of the real estate or AEC owning process? If you're looking to use AI or machine learning, you need to have the problem first. Now, point cloud segmentation, that's obvious. If you can do point cloud semantic segmentation, it's going to be with some AI.
Whereas once you get a portfolio of buildings together and you want to see what you're doing with that portfolio, you're not necessarily going to have AI use cases tripping over themselves. They're few and far between, for now at least. I imagine this is going to change in the future.
Austin:
Especially as we start to learn more about what data is available. There's companies out there producing a ton of different data points.
It's easier for us to wrap our heads around machine learning applied to the geometry, to the point cloud information. I know there's been a lot applied to the automation of layouts and design in general. That's also been pretty popular.
Murray:
We haven't spoken about that at all. There's the people who are designing with AI. They're doing some pretty great things, aren't they? What have you been looking at like that? Is this what test fits are doing?
Austin:
When it comes to test fits, that's the plan and arrangement of a space. I've seen applications of generative adversarial networks to that problem, as well as the extraction of 2D information from point clouds, the CAD lines as well.
Murray:
And that seems to be pretty successful. The CAD lines from point clouds, that works pretty well.
Are they just trying to look at it in 2D? Have they just simplified the problem? Is that why drawing it in CAD and these things work pretty well? Because if you just look at it in 2D, then you can do it.
Austin:
They simplified it. They removed that extra dimension. I guess that's what makes it easier, beyond some obvious simplification. But it is interesting, and it's a good first step and a really promising approach to producing some of this information, these plans, from that raw data.
Murray:
Produce information, but make that information clean enough that it doesn't take a human just as long afterwards to clean up what the AI did. That's the goal, really. Then at the bottom of my list of things is digital twins.
Aside from machine learning on point clouds, which is great, I have the sense that if you own a bucketload of buildings, you want sensors inside them, getting all of that data into a digital platform and analyzing it constantly. It might not be machine learning on day one.
It might be machine learning eventually, but it's getting all this data about your buildings and seeing where you find the insights or the value within that data. It's great to collect as much information as you can about your asset, but that can be pointless and just cost you money.
What you need is insights on your building, and I'm guessing that's where the computer algorithms are going to help us.
Austin:
When it comes down to it, all of this is trying to understand the data. What does it mean? What is my data trying to show me? And how can I use it to inform my decisions? Every use case is unique to the individual, but there's a certain subset of information that's useful across all use cases.
That's something we can look into; there's some standardization there. What's the most useful information we can look at? Machine learning isn't necessarily the best approach when it comes to the simple, tabular information about a building. It's more geared towards the more complex automation tasks.
Murray:
Whatever tools you need to get that information out of your building, whether it's a simple tool to look at tabular data or something more complex, it's about choosing the right tool, really, isn't it?
Austin:
Something interesting too, not just applied to BIM but to data in general, is incorporating the language models that have really hit the mainstream. Going from plain language to some sort of query, or some sort of information, could be really useful for keeping the power of the data while also simplifying it.
For example, simply asking a question, “how many chairs do I have?”
Then being able to extract a number and provide it back. Converting that natural language into a query, into information returned to the user, could be really powerful. It could be a really powerful way of diving into a data set, especially when it comes to information about your portfolio of buildings.
I'm not exactly sure what questions users would be asking, but if we can hone in on the format of how those questions are asked, then we can essentially tune these large language models to simplify the data.
Murray:
Understanding the format of those questions to be able to make that query, because now you're going to be making a potentially infinite number of queries of the data, depending on the input question.
Austin:
From what I've seen, and I've only tinkered around with it, with some of these large language models you can tune them by providing examples. If we provided, say, 10 examples of different questions and how each question relates to a query that corresponds to an actual action on the back end of the platform, then the language model can start to recognize, “okay, this is a question about this information, now I can pass this parameter.”
We can also structure a guide, a formatted way to go about asking these questions. I could see a combination of not just open text entry but a set of dropdowns or something like that to steer users into asking these formatted questions. How do we not necessarily show every parameter of every building in our data set, but give our users the power to ask questions about their data by whatever means they want, and provide a simple way to ingest the information they're looking for?
Between the geometry and the metadata, we have a lot of different data points that we're presenting.
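The few-shot setup Austin describes can be sketched as pairing example questions with the back-end query each should trigger, then using those pairs as a prompt prefix for a language model. The actual model call is left out here, and the query syntax and example questions are invented for illustration.

```python
# Hypothetical question/query pairs; the query language is made up.
EXAMPLES = [
    ("How many chairs do I have?", "COUNT(elements WHERE category='chair')"),
    ("What is the total floor area?", "SUM(buildings.area_m2)"),
    ("List buildings over 10 floors.", "FILTER(buildings WHERE floors > 10)"),
]

def build_prompt(question):
    """Format the few-shot examples plus the new question for a language model."""
    lines = ["Translate each question into a portfolio query."]
    for q, query in EXAMPLES:
        lines.append(f"Q: {q}\nA: {query}")
    # The model is expected to complete the final "A:" with a query.
    lines.append(f"Q: {question}\nA:")
    return "\n\n".join(lines)

print(build_prompt("How many desks do I have?"))
```

The model's completion would then be validated and mapped to an actual back-end action, which is where the dropdowns and formatted-question guide Austin mentions would constrain what users can ask.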
Murray:
And actually visualizing that data is a whole separate issue. Visualizing the 3D BIM data is one thing, but there's all sorts of data other than the 3D model, and there's how you're getting that data out and then visualizing it, on top of a 3D model on the web, all sorts of different data.
It's not straightforward. It depends on what you're asking, what data you're looking for.
Austin:
We're also limited in what we can show in a presentable way on our platform. So it's about being clever about how we're providing users access, while hiding the complexities of the unnecessary information so we don't overwhelm them. There's that balance.
Murray:
It's a complicated balancing act, because we want to show you as much data as we can about your building, but we also want to make it simple, legible, and easy to navigate. It's the fine line between giving as much depth as you can and keeping it legible.