
Building Healthy Asset Models

Understanding how to integrate with your client’s asset and facility management requirements can at first seem like a daunting task; however, getting it right can be as simple as asking the right questions.

From the perspective of a project manager or even an asset owner, some of the questions that should be asked are:

  • When will the facilities management team get access to the model? Will it be during the design phases of the project, or will it be once the design is complete?
  • Will the facilities management team be able to dictate to the design team what information is incorporated in the model?
  • Will the facilities team be able to review the model before construction and commissioning?
  • Will the facilities management team own the model when it is completed?
  • Who maintains the model once handover is completed?

Then there are the questions that the more technically minded team members tend to focus on, around information requirements:

  • What information does the facilities team require?
  • Do the facilities team actually need all of this information?
  • What data formats does the facilities team require?
  • Do they have existing non-BIM facilities packages that require integration into the new BIM enabled system? Can it be integrated at all?

If you’re interested in how I approached the problem of recording existing assets in one of Australia’s largest health precincts, take some time out to check out my AU2018 presentation:

https://www.autodesk.com/autodesk-university/class/Building-Healthy-Asset-Models-Case-Study-Existing-Asset-Recording-BIM-2018

With a special guest appearance about halfway through from probably Autodesk University’s most famous cat, Burrito. One of the dangers of presenting remotely.

Don’t Have Dimensions in Families for COBie? Don’t Worry!

So you’re new to COBie and a deadline is approaching. Your favourite project BIM manager comes up to you a few hours before the deadline and tells you, “We have to do dimensions… on every element in the COBie drop. You have your dimensions ready, right?”

Well, there is no need to stress; as always, there is potential for Dynamo to come to the rescue. I put this one together in Dynamo v1.3, but I have tested it in v2.x as well and it still works just fine. One note though: if you save your old 1.3 graphs in 2.x, they become 2.x graphs forever.

The way that I approach the majority of my COBie work is through a 3D view and a schedule with a set of very handy filters, so I’ve continued down this route for my Dynamo graph. I start it off by getting all the elements in the active view – my 3D COBie view.

Just in case there is anything in the view that isn’t a family, I get the element type of each element, convert that value to a string and then filter the list based on the string “family”. This is because every family in the view will be prefixed with Family Type: whereas non-family objects will not.
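If you prefer to keep this step in a Python node instead, a minimal sketch of the same filter might look like the below. The input names are my own, and it assumes you feed in the elements alongside their element types already converted to strings:

elements = IN[0]      # all elements in the active 3D COBie view
type_strings = IN[1]  # the element types converted to strings

# Loadable families read as "Family Type: ..." once converted to a
# string, so a simple substring test separates them out
families = [e for e, s in zip(elements, type_strings)
            if "family" in s.lower()]
OUT = families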

Once we have the filtered list of families, we need to take the bounding box of each element, which we do with an Element.BoundingBox node. Using Spring Nodes, we next use Springs.Geometry.Extents to separate out each dimension individually.

The next step is a bit of simple math. I want to ensure that the length parameter is always longer than the width, so with a few if statements and some greater-than and less-than nodes, the top pair of nodes always provides the smaller number that will populate our width parameter, and the lower pair always provides the larger number that will feed into our length.

Finally, we populate our parameters with the correct information. Note that your parameters must be set up as type parameters for this to work; if you have incorrectly made them instance parameters the script will not work, but if you’re paying attention you’ll see that the hint is in the name of the parameter.
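For anyone curious what that cluster of nodes boils down to, here’s a rough Python sketch of the same logic (bounding boxes in, smaller and larger plan dimensions out):

bboxes = IN[0]  # bounding boxes from Element.BoundingBox
widths, lengths = [], []

for bb in bboxes:
    x = bb.MaxPoint.X - bb.MinPoint.X
    y = bb.MaxPoint.Y - bb.MinPoint.Y
    widths.append(min(x, y))    # the smaller plan dimension -> width
    lengths.append(max(x, y))   # the larger plan dimension -> length

OUT = widths, lengths

The two output lists can then be pushed into your type parameters with Element.SetParameterByName.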

Now, don’t forget that with the COBie element data, you should be nominating dimensions inclusive of the maintenance requirements for that object. You could always add a little bit of extra fat to your dimensions, but I would highly recommend approaching COBie and BIM in general the right way: include spatial elements that indicate the overall dimensions including maintenance access, similar to what is shown in the electrical switchboard below.


For those that want to get started a bit quicker, I’ve provided my graph for download below.

Merging IFC Files with BIMServer

A question came through via email at the office the other day:

We’ve been trying to issue federated IFC files, but the combined size balloons up to 20x the original size. Is getting these models to a consumable size a wild goose chase?

Well, the short answer is no, it’s not a wild goose chase and it is possible to generate federated IFC files that are almost exactly the same file size as the sum of the smaller files.

The long answer?

Open up this page in a browser:

https://github.com/opensourceBIM/BIMserver/releases/tag/1.4.0-FINAL-2015-11-04

Download 1.4.0-FINAL-2015-11-04.jar and save it somewhere on your local machine. Don’t use spaces in the folder names when you save it or the tool won’t work properly; use underscores if you need a space.

There is even quite a comprehensive help document with information on configuring the BIMServer tool:

https://github.com/opensourceBIM/BIMserver/wiki/JAR-Starter

There is not a lot to do in the setup process though. I just made sure I was set to localhost and that there were no conflicts with my port, then clicked start. After you click start, it will take a while to run through what it needs to do; when finished, you’ll see a line that looks like:

02-10-2018 11:23:33 INFO  org.bimserver.JarBimServer – Server started successfully

Now click the launch web browser button. It will take you to a window that looks like this:

For all the work that you are going to do from here on, you need to use Chrome or similar. Internet Explorer will not work. Also note that if you log yourself out, you will need to log back in with your email address as the username, not your name.

Fill out this page with your details; you’re creating a user here, and you will be able to log in with these details again in the future. Because this is a web application, we could actually host it internally and have a permanent server setup; it’s just that this workflow uses it as a web application hosted on your local machine.

Now that you have created your user and logged in, you will be in a project browser. Select Project -> New Project.

Fill out the details as required to create your project:

This essentially provides a bucket in which you store all of your data. The next step is to create sub-projects to hold the IFC files that you want to merge.

To then upload a model to each sub-project, you need to use the “Checkin” option. It will take a little while to import the files depending on how complex they are.

My files are between 30 and 70 MB each depending on the file. They take about 4–6 minutes to import using a laptop with an i7-6820 CPU.

You’ll know that the files are importing because the IfcGeomServer will be using CPU:

You can view your model in the web browser to verify everything is in the correct location by clicking on the eye (grey = off, coloured = on).

For the merge, you literally just have to click the little arrow to the right of the top level project, and select download.

Depending on how big your models are, the limiting factor might be whether you have enough memory to perform the merge.

If the download doesn’t work, hit the back button or refresh the main page before you try and download. You need to see an eye on the top level project for the download to work.

You can verify your IFC file either by loading it into a new project within the BIM Server web application, or you can load it into Navisworks.

My resulting IFC file is 173 MB, which is exactly the sum of my four individual files.

If you want to try to reduce the size of the IFC file, use the Solibri IFC Optimiser:

https://www.solibri.com/solibri-ifc-optimizer

After running the optimiser, the file size was reduced to 124 MB.


Placing Multiple Views on Sheets With Dynamo

Keeping on the theme of Dynamo and drawing setup, I had a series of MEP models to set up, each with two views that needed to be placed on each sheet: a main view and a smaller inset view.

I started from my sheet generation Dynamo graph and ran through a number of different options to enable the graph to place multiple views on sheets in the correct location.

The method I found posed the fewest problems was to add an extra column to my Excel file for the names of the inset views.

As a bit of groundwork, I needed to figure out where I wanted my views to be located, so I made up a mock sheet with the views placed where I wanted them to sit.

I then threw together a quick graph that allowed me to select a viewport I’d placed on the sheet with the Select Model Element node; running that through the Rhythm node Viewport.LocationData gives me the centre point of the viewport box. The centre point of each viewport will be the value used when we move the viewports later.

Once the viewport locations are picked up, we can get to modifying our original sheet creation graph.

The first modification that we need to make is for the additional column in Excel. Copy the original section of the graph and change the code block to 3 so that we are reading from column D of the Excel file (the column index is zero-based).
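If the indexing seems opaque, this is all that code block is doing. A minimal Python-node sketch, assuming the rows from Data.ImportExcel come through as lists:

rows = IN[0]  # rows read from the Excel file

# Column indices are zero-based: A = 0, B = 1, C = 2, D = 3
inset_view_names = [row[3] for row in rows]
OUT = inset_view_names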

In the next step we filter our views again, but this time we’re filtering two separate lists of views. Things get a little messy, but it’s still reasonably easy to manage. Note that I dropped a List.Clean node into the mix as I was having views return with no data; the List.Clean node removes empty and null values.

Remember that if you’re going to clean the list, you need to feed the other nodes with the cleaned list. Do not mix and match between clean and unclean lists, otherwise you’ll have a bad time.

Now we should have two lists of element ids, one for our main views and another for our inset views.

Our main views get fed through the same series of nodes from the moving views on sheets post.

While all of this is happening, the output of the Sheet.ByNameNumberTitleBlockAndViews node shoots off a second time to tell our insets what sheets they need to be placed on.

The problem you will stumble into when placing views with this method is that if the sheet hasn’t yet been created, the inset views won’t be placed. The way that I decided to handle it was by using a Passthrough node from the Clockwork package. The Passthrough node enforces an order of execution: it waits for the node threaded into the waitFor input to complete before sending on the data threaded into the passThrough input.
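There isn’t much magic inside a node like this; a minimal sketch of the same idea in a Python node (mirroring Clockwork’s two input names) would be:

passThrough = IN[0]  # the data you actually want passed on
waitFor = IN[1]      # wired in purely to force execution order

# Dynamo has to evaluate both inputs before this node can run, so
# whatever feeds waitFor is guaranteed to have finished first
OUT = passThrough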

I fed the Passthrough node with the results of moving the main viewport on each sheet; once the main views have been moved, the sheet numbers are sent through to the Viewport.Create node.

Running the script, we end up with views placed exactly where we want them. The example GIF below is recorded in real time and runs for 14 seconds from start to finish, which includes checking that each sheet has been correctly created.


This is great and all, but what happens when you have two different types of “main” views: one that you want placed along with an inset like the above example, and another where you want the views placed centrally?

The way I found best to handle this scenario is to add a String.Contains node along with an If node to control the location of the viewports depending on the name of the view itself.

In this particular example, the names of the “main” views are checked for whether they contain the string Platform Level_; if they do, the views are placed centrally on the sheet, otherwise they’re placed offset from centre to allow for the inset view to be placed on the same sheet.
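For reference, the String.Contains and If pairing reduces to a single conditional. A hedged Python sketch, with my own input names and this project’s Platform Level_ prefix:

view_names = IN[0]   # names of the "main" views
platform_xy = IN[1]  # centre-of-sheet location
main_xy = IN[2]      # offset location that leaves room for the inset

OUT = [platform_xy if "Platform Level_" in name else main_xy
       for name in view_names]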

The nodes labelled platform view x, main view x, platform view y and main view y are simply code blocks that I have renamed so I know exactly what they are.

Tagging Invert Levels

Over the years, I’ve seen a lot of, err… solutions for tagging invert levels. From adding shared parameters to your pipe families through to using Dynamo and everything in between.

Ignoring the fact that you can’t add parameters to system families like duct and pipework, there is a far simpler way to get what you’re after.

Ever heard of the spot elevation tool?

The key is in the settings that you use.

The settings that I use in my templates are as follows; the settings to change are highlighted below.

In the units format dialogue, change your settings to match the following:

And of course, don’t forget to select the bottom elevation!

Practical Dynamo – Moving Views Based on Another View

Okay, so we’re on a roll with practical Dynamo usage. Last week we looked at placing views centrally on our sheets, but what if you didn’t want the view centrally placed? What if you wanted views placed in the same location on all sheets maybe in the top left of the page?

As always with Dynamo, there is a solution for that. Again, we’re going to be using the Rhythm custom node package to get the work done. This method requires one sheet to be used as a template that all the following sheets are based on.

In our example this time around, we want the view located in the top left (shown on the left), but by default Dynamo places our views in the bottom left (shown on the right).


This workflow can be easily integrated into our previous graph where we created new sheets in Dynamo using Excel; however, for this example we’re going to create a standalone graph. It’s assumed this time around that you already have all the required sheets and views created.


We start by taking all of our sheets, which we do simply by using Categories and then All Elements of Category. After that, we get into our Rhythm nodes.

First, we get a list of all of the viewports on our sheets using the Sheet.GetViewportsAndViews node. Run the list through a List.Clean node to remove the empty list entries; this leaves us with just the viewport entries.

Meanwhile, we also need to get the viewport from our template sheet. In this instance our template sheet will be drawing H101, which we’ll select using the Sheets node and then feed into the Sheet.GetViewportsAndViews node, both of which are from the Rhythm package.

And finally we feed our data lists into the Viewport.SetLocationBasedOnOther node, which again is from Rhythm. It’s as simple as that.

Hit the run button and watch the magic happen.

Site Extraction with flux.io and Dynamo

By now, most people in the industry will have heard of flux.io, a spin-off from X (formerly Google X). Recently, flux.io updated their site extraction tool, which pulls data from free open-source datasets, OpenStreetMap and NASA. When combined with Dynamo, it couldn’t be simpler to pull topography information into your Revit model.

So how do we get started with this new-fangled technology?

Firstly, you’ll need a flux.io account. Once you have that sorted, head on over to https://extractor.flux.io/ where you’ll be greeted with a Google map and can search for your location. The map system works exactly as you’d expect. Simply drag and resize the selection box around the area you’re interested in and then select what you want from the menu on the top right of your screen.

When your data is ready, you can open it in flux and review the results. You simply drag and drop your keys from the column on the left into the space on the right. You can pan, zoom and rotate your way around the 3D preview, although as someone that works in Revit and Navisworks all day long I found that the controls aren’t the easiest.

Struggling with the navigation?
right mouse button = pan
left mouse button = orbit
scroll button = zoom

So all of this is great, but how do you get this into Revit? It’s actually incredibly simple.

You will need to have both Dynamo and the flux.io plugin suite installed, but once you do you’re only a few minutes away from generating a Revit topography.

To get started you will need to log in to flux.io through Revit and Dynamo. If it’s your first time using flux.io, you might have to approve the connection between Revit/Dynamo and flux, similar to what you would do when sharing account information with online services such as Google or Facebook.

Find the Flux package within Dynamo and first drop in the Flux Project node.

Once you have your flux project selected, it’s just three more nodes. Drop in the Receive from Flux node and select the topographic mesh from the drop-down. From there, push the flux topography into Mesh.VertexPositions and then finally into Topography.ByPoints.
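If you’d rather script those last two nodes, a minimal sketch in a Dynamo Python node (assuming the Flux mesh arrives as a standard Dynamo mesh) looks like this:

import clr
clr.AddReference("RevitNodes")
import Revit
clr.ImportExtensions(Revit.Elements)
from Revit.Elements import Topography

mesh = IN[0]  # the topographic mesh received from Flux

# The mesh vertices become the point cloud that Revit builds
# the toposurface from
OUT = Topography.ByPoints(mesh.VertexPositions)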

Comparing the flux topography in red against the professional survey in blue, we can see that the flux topography is no replacement for a real survey; we are looking at a 5–8 m difference between the survey and the flux data. Thankfully, surveyors aren’t going to be out of a job any time soon. This is only the case on the example site in Sydney though; other sites are far more accurate depending on where the source data comes from. Remember that the flux data comes from a combination of sources, including satellite survey, which leads to varying levels of accuracy. You shouldn’t rely on open source data like this as your sole source of information; you should be verifying the data against relevant site survey information.

The inaccuracy of the data doesn’t mean that it’s useless, though. Provided you’re able to reference the flux data against known survey data and adjust to suit, it provides an excellent opportunity to fill in missing information surrounding your known survey and site. You then have the opportunity to use the data for visualisation in concept stages or flyover presentations of large sites or precincts.


Using Dynamo to Generate Pipework Hangers

If you’re finding yourself modelling more detailed models that reflect proposed fabrication or constructed works, Cesare Caoduro over at the BIM and Others blog has a great step by step tutorial on how to use Dynamo to generate Unistrut style pipework supports.

If you’re mechanical or electrical, it would be quite easy to adapt the first portion of the script to generate the same type of supports for ductwork and cable trays.

You can check out Cesare’s post here.

Consistency is Key. Setting Project Info With Dynamo

Over the last 3 months I’ve been busy working hard on coordinating the BIM for an existing infrastructure study of a hospital. The site consists of everything from heritage-listed sandstone buildings constructed in the 1800s, where for obvious reasons there are no existing drawings, to a building that’s currently in the final stages of construction and has been fully designed and coordinated in BIM. The infrastructure study involved locating assets and services that interconnect between buildings, within relatively accurate space in the BIM, at LOD 200 as per the BIMForum guidelines.

When it came to the BIM, we decided to work with one building per MEP model, which meant we had 28 MEP building models, 28 architecture building models that were created using a series of stacked DWG files, and 4 site models. The obvious problem with so many models was going to be the consistency of the data and how we would go about verifying it. Ensuring that all 60 models carried the same consistent information was a mountainous task that would have taken an exorbitant number of hours to complete if reviewed manually, even utilising BIMLink.

Enter stage left: Dynamo.

We used Dynamo far more extensively on this project than on any I have worked on before. Normally I’d work with little snippets to process small amounts of data and automate minor repetitive tasks, but this project was a real BIM project; there were no traditional drawing deliverables, which actually seemed to genuinely baffle newcomers to the project. The deliverable was the federated model and, more importantly, the information contained within all the individually modelled elements. A few hours on one of my Sundays and I ended up with what you see below.


That structured mess was able to verify photo file names and associated photo URLs. It verified that asset codes were correct and, if they weren’t, generated new asset codes in the required format; it also checked and corrected all the information required to generate those new asset codes; and finally, probably the simplest part of it all, it filled in the project information parameters for us. It was run on all MEP models, with another run on all the architecture models that we created.

Although we were able to automate a lot of really mundane processes, they were for the most part fairly project-specific, so even though the Dynamo script itself was invaluable to the project, other than the experience gained it doesn’t hold much value for future projects. There was, however, one custom node that I put together for populating the Project Information parameters that will probably get used again and again on projects in the future.


Each input of the node is filled with a string for each individual parameter. In the project, the building name/number parameter relied on the levels within the model being named correctly, for which there was another portion of the script that checked the level naming conventions were followed.

The processing of the data itself is performed by Python code inside the custom node, after which the output shows the data that has been filled in. You can either pick the custom node up from the MisterMEP Dynamo package or, if you want to recreate this yourself, the Python code is below:

import clr
clr.AddReference("RevitServices")
import RevitServices
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager

doc = DocumentManager.Instance.CurrentDBDocument
projinfo = doc.ProjectInformation
#The inputs to this node will be stored as a list in the IN variables.
OrgName = IN[0]
OrgDesc = IN[1]
BuildNumber = IN[2]
ProjAuthor = IN[3]
ProjDate = IN[4]
ProjStat = IN[5]
ProjClient = IN[6]
ProjAddress = IN[7]
ProjName = IN[8]
ProjNumber = IN[9]

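# Changes to project information must be made inside an open transaction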
TransactionManager.Instance.EnsureInTransaction(doc)


projinfo.OrganizationName = OrgName
projinfo.OrganizationDescription = OrgDesc
projinfo.BuildingName = BuildNumber
projinfo.Author = ProjAuthor
projinfo.IssueDate = ProjDate
projinfo.Status = ProjStat
projinfo.ClientName = ProjClient
projinfo.Address = ProjAddress
projinfo.Name = ProjName
projinfo.Number = ProjNumber

TransactionManager.Instance.TransactionTaskDone()

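# Echo the stored values back out so the node output confirms the result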
elementlist = list()
elementlist.append("DONE!")
elementlist.append(projinfo.OrganizationName)
elementlist.append(projinfo.OrganizationDescription)
elementlist.append(projinfo.BuildingName)
try:
    elementlist.append(projinfo.Author)
except:
    elementlist.append(list())
elementlist.append(projinfo.IssueDate)
elementlist.append(projinfo.Status)
elementlist.append(projinfo.ClientName)
elementlist.append(projinfo.Address)
elementlist.append(projinfo.Name)
elementlist.append(projinfo.Number)
OUT = elementlist
#OUT = "done"