Understanding how to integrate with your client’s asset and facility management requirements seems like a daunting task at first; however, getting it right can be as simple as asking the right questions.
From the perspective of a project manager or even an asset owner, some of the questions that should be asked are:
When will the facilities management team get access to the model? Will it be during the design phases of the project, or will it be once the design is complete?
Will the facilities management team be able to dictate to the design team what information is incorporated in the model?
Will the facilities team be able to review the model before construction and commissioning?
Will the facilities management team own the model when it is completed?
Who maintains the model once handover is completed?
Then there are the questions that the more technically minded team members tend to focus on, around information requirements:
What information does the facilities team require?
Do the facilities team actually need all of this information?
What data formats does the facilities team require?
Do they have existing non-BIM facilities packages that require integration into the new BIM-enabled system? Can they be integrated at all?
If you’re interested in how I approached the problem of recording existing assets in one of Australia’s largest health precincts, take some time out to check out my AU2018 presentation.
Download 1.4.0-FINAL-2015-11-04.jar and save it somewhere on your local machine. Don’t use spaces in the folder names when you save it or the tool won’t work properly; use underscores if you need a space.
There is even quite a comprehensive help document with information on configuring the BIMServer tool:
There is not a lot to do in the setup process though. I just made sure I was set to localhost and that there were no conflicts with my port, then clicked start. After you click start, it will take a while to run through what it needs to do; when finished you’ll see a line that looks like:
02-10-2018 11:23:33 INFO org.bimserver.JarBimServer – Server started successfully
Now click the launch web browser button. It will take you to a window that looks like this:
For all the work that you are going to do from here on, you need to use Chrome or similar; Internet Explorer will not work. Also note that if you log yourself out, you will need to log back in with your email address as the username, not your name.
Fill out this page with your details; you’re creating a user here, and you will be able to log in with these details again in the future. Because this is a web application, we could actually host it internally and have a permanent server setup; this workflow just uses it as a web application hosted on your local machine.
Now that you have created your user and logged in, you will be in the project browser. Select Project -> New Project.
Fill out the details as required to create your project:
This essentially provides a bucket in which you store all of your data. The next step is to create sub-projects for the IFC files that you want to merge.
To then upload a model to each sub-project, you need to use the “Checkin” option. It will take a little while to import the files depending on how complex they are.
My files are between 30 and 70MB each; they take about 4 to 6 minutes to import on a laptop with an i7 6820 CPU.
You’ll know that the files are importing because the IfcGeomServer will be using CPU:
You can view your model in the web browser to verify everything is in the correct location by clicking on the eye (grey = off, coloured = on)
For the merge, you just have to click the little arrow to the right of the top-level project and select download.
Depending on how big your models are, the limiting factor may be whether you have enough memory to perform the merge.
If the download doesn’t work, hit the back button or refresh the main page before you try to download. You need to see an eye on the top-level project for the download to work.
You can verify your IFC file either by loading it into a new project within the BIM Server web application, or you can load it into Navisworks.
My resulting IFC file is 173MB, which is exactly the sum of my four individual files.
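If you want to script that sanity check rather than eyeball file sizes, a few lines of plain Python will do it. This is just a sketch; the file paths in the comment are hypothetical stand-ins for your own models.

```python
import os

def merge_size_check(source_paths, merged_path):
    """Compare the merged IFC's size against the sum of its source files.

    Returns (expected_bytes, actual_bytes, difference). A difference of
    zero means the merge is byte-for-byte the sum of its inputs.
    """
    expected = sum(os.path.getsize(p) for p in source_paths)
    actual = os.path.getsize(merged_path)
    return expected, actual, actual - expected

# Hypothetical usage:
# expected, actual, diff = merge_size_check(
#     ["modelA.ifc", "modelB.ifc", "modelC.ifc", "modelD.ifc"], "merged.ifc")
```

It won't tell you the geometry merged correctly, but a wildly different size is a quick first warning that something went wrong.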
If you want to try to reduce the size of the IFC file, use the Solibri IFC Optimiser.
By now, most people in the industry will have heard of flux.io, a spin-off from X (formerly Google X). Recently, flux.io updated their site extraction tool, which pulls data from free open source datasets: OpenStreetMap and NASA. Combined with Dynamo, it couldn’t be simpler to pull topography information into your Revit model.
So how do we get started with this new-fangled technology?
Firstly, you’ll need a flux.io account. Once you have that sorted, head on over to https://extractor.flux.io/ where you’ll be greeted with a Google map and can search for your location. The map system works exactly as you’d expect. Simply drag and resize the selection box around the area you’re interested in, then select what you want from the menu at the top right of your screen.
When your data is ready, you can open it in flux and review the results. You simply drag and drop your keys from the column on the left into the space on the right. You can pan, zoom and rotate your way around the 3D preview, although as someone who works in Revit and Navisworks all day long I found the controls aren’t the easiest.
Struggling with the navigation?
right mouse button = pan
left mouse button = orbit
scroll button = zoom
So all of this is great, but how do you get this into Revit? It’s actually incredibly simple.
You will need to have both Dynamo and the flux.io plugin suite installed, but once you have those installed you’re only a few minutes away from generating a Revit topography.
To get started you will need to log in to flux.io through both Revit and Dynamo. If it’s your first time using flux.io, you might have to approve the connection between Revit/Dynamo and flux, similar to what you would do when sharing account information with online services like Google or Facebook.
Find the Flux package within Dynamo and first drop in the Flux Project node.
Once you have your flux project selected, it’s just three more nodes. Drop in the Receive from Flux node and select the topographic mesh from the drop-down. From there, push the flux topography into Mesh.VertexPositions and then finally into Topography.ByPoints.
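Conceptually, the Mesh.VertexPositions step is just flattening the flux mesh into the unique point list that a ByPoints-style topography call consumes. A rough illustration of that flattening in plain Python, with the mesh represented as simple tuples; this is not the Flux or Dynamo API, just the idea behind the data flow:

```python
def unique_vertices(triangles):
    """Flatten a triangle mesh into its unique vertex list.

    `triangles` is a list of 3-tuples of (x, y, z) vertices. Shared
    vertices between triangles are emitted only once, preserving the
    order in which they are first seen.
    """
    ordered = []
    seen = set()
    for tri in triangles:
        for vertex in tri:
            if vertex not in seen:
                seen.add(vertex)
                ordered.append(vertex)
    return ordered
```

Two triangles that share an edge contribute only four unique points between them, which is exactly why the vertex extraction step sits between the mesh and the topography node.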
Comparing the flux topography in red against the professional survey in blue, we can see that the flux topography is no replacement for a real survey; we are looking at a 5-8m difference between the survey and the flux data. Thankfully, surveyors aren’t going to be out of a job any time soon. This is only the case on the example site in Sydney though; other sites are far more accurate depending on where the source data comes from. Remember the flux data comes from a combination of sources, including satellite survey, which leads to varying levels of accuracy. You shouldn’t rely on open source data like this as your sole source of information; refer to the relevant site survey information to verify the data.
The inaccuracy of the data doesn’t mean that the flux data is useless, though. Provided you’re able to reference the flux data against known survey data and adjust to suit, this is an excellent opportunity to use the flux data to fill in missing information surrounding your known survey and site. You then have the opportunity to use the data for visualisation in concept stages or flyover presentations of large sites or precincts.
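If you have matched survey points available, quantifying that vertical offset is straightforward. A minimal sketch in plain Python; it assumes the two point lists are already matched in order by plan position, which a real survey comparison rarely hands you for free:

```python
def mean_vertical_offset(survey_pts, flux_pts):
    """Average Z difference between matched survey and flux points.

    Both inputs are lists of (x, y, z) tuples in the same order.
    A positive result means the flux data sits above the survey.
    """
    if len(survey_pts) != len(flux_pts):
        raise ValueError("point lists must be matched one-to-one")
    diffs = [f[2] - s[2] for s, f in zip(survey_pts, flux_pts)]
    return sum(diffs) / len(diffs)
```

Once you know the average offset for your site, you can shift the flux topography to sit against the known survey before using it as fill-in context.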
Over the last three months I’ve been busy coordinating the BIM for an existing infrastructure study of a hospital. The site consists of everything from heritage-listed sandstone buildings constructed in the 1800s, for which there are, for obvious reasons, no existing drawings, to a building that’s currently in the final stages of construction and has been fully designed and coordinated in BIM. The infrastructure study involved locating assets and services that interconnect between buildings, placed with reasonable spatial accuracy in the BIM at LOD 200 as per the BIMForum guidelines.
When it came to the BIM, we decided to work with one building per MEP model, which meant we had 28 MEP building models, 28 architecture building models created using a series of stacked DWG files, and 4 site models. The obvious problem with so many models was going to be the consistency of the data and how we would go about verifying it. Ensuring that all 60 models carried the same consistent information was a mountainous task that would have taken an exorbitant number of hours if reviewed manually, even utilising BIMLink.
Enter stage left: Dynamo.
We used Dynamo far more extensively on this project than on any I have worked on before. Normally I’d work with little snippets to process small amounts of data and automate minor repetitive tasks, but this project was a real BIM project; there were no traditional drawing deliverables, which actually seemed to genuinely baffle newcomers to the project. The deliverable was the federated model and, more importantly, the information contained within all the individually modelled elements. A few hours on one of my Sundays and I ended up with what you see below.
That structured mess was able to verify photo file names and associated photo URLs. It verified that asset codes were correct and, if they weren’t, generated new asset codes in the required format; it also checked and corrected all the information required to generate those new asset codes. Finally, and probably the simplest part of it all, it filled in the project information parameters for us. It was run on all MEP models, with another run on all the architecture models that we created.
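The asset code portion boils down to "match the required format, otherwise regenerate". A stripped-down sketch of the idea in plain Python, using an entirely hypothetical code format; the real project's format and generation rules were different and more involved:

```python
import re

# Hypothetical asset code format: Bnn-DD-NNNN, e.g. "B07-ME-0042"
# (building number, discipline code, sequence). Illustrative only.
ASSET_CODE = re.compile(r"^B\d{2}-[A-Z]{2}-\d{4}$")

def verify_or_generate(code, building, discipline, sequence):
    """Return the code unchanged if it matches the format, otherwise
    build a fresh one from the corrected source information."""
    if code and ASSET_CODE.match(code):
        return code
    return "B%02d-%s-%04d" % (building, discipline, sequence)
```

In the real script the same pattern ran over every element in the model, with the building, discipline and sequence inputs themselves checked and corrected first.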
Although we were able to automate a lot of really mundane processes, they were for the most part fairly project-specific, so even though the Dynamo script itself was invaluable to the project, other than the experience gained it doesn’t hold much value for future projects. There was, however, one custom node that I put together for populating the Project Information parameters that will probably get used again and again on future projects.
Each input of the node is filled with a string for each individual parameter. In this project, the building name/number parameter relied on the levels within the model being named correctly; another portion of the script checked that the level naming conventions were followed.
The processing of the data itself is performed by Python code inside the custom node, and the output shows the data that has been filled in. You can either pick the custom node up from the MisterMEP Dynamo package or, if you want to recreate it yourself, use the Python code below:
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager

doc = DocumentManager.Instance.CurrentDBDocument
projinfo = doc.ProjectInformation

# The inputs to this node are stored as individual items in the IN list.
OrgName = IN[0]
OrgDesc = IN[1]
BuildNumber = IN[2]
ProjAuthor = IN[3]
ProjDate = IN[4]
ProjStat = IN[5]
ProjClient = IN[6]
ProjAddress = IN[7]
ProjName = IN[8]
ProjNumber = IN[9]

# Any change to the Revit document must happen inside a transaction.
TransactionManager.Instance.EnsureInTransaction(doc)

projinfo.OrganizationName = OrgName
projinfo.OrganizationDescription = OrgDesc
projinfo.BuildingName = BuildNumber
projinfo.Author = ProjAuthor
projinfo.IssueDate = ProjDate
projinfo.Status = ProjStat
projinfo.ClientName = ProjClient
projinfo.Address = ProjAddress
projinfo.Name = ProjName
projinfo.Number = ProjNumber

TransactionManager.Instance.TransactionTaskDone()

# Output the values that were written so the node reports what it filled in.
OUT = [OrgName, OrgDesc, BuildNumber, ProjAuthor, ProjDate, ProjStat,
       ProjClient, ProjAddress, ProjName, ProjNumber]
In the last 6 to 18 months, the 3D collaboration and visualisation world has exploded with new software solutions to make life easier. The latest contender is Modelo, from a startup based in Cambridge. Modelo is a cloud-based service that lets you view 3D models that have been optimised for your web browser, giving you the ability to view models on almost any device with a data connection. Being a cloud-based service, the recipient of your model doesn’t even need to own viewing software, as the model comes to you through a series of tubes and is viewed entirely on the line.
You can upload any Revit, SketchUp or Rhino file to Modelo, and the original file is converted to an optimised format for viewing. The original file is kept on the Modelo servers; however, there is the option to delete it after the optimised file has been created.
Modelo is impressively fast for a browser based model viewing platform. You can share models with clients and the design team no matter where they’re located, allowing the team to annotate models and discuss through an online chat system.
It’s not collaboration in the league of Revizto; it’s collaboration made simple.
The commenting functionality is extremely well thought out, with the ability to cut 3D sectional views or attach 2D images such as photos or plan views. Comments can be kept private or flagged as ‘client ready’, so that when you share your model only the client-ready comments are displayed.
Camera locations are remembered in the comments as well, meaning that when a comment is selected, the model seamlessly flies to the view the comment was created in, so you see exactly what the person making the comment saw.
You can even adjust basic settings within the model, such as turning layers on and off (it uses Revit worksets) and even adjusting the location of the sun to change shadow detail in real time. Of course, with just simple sliders and the model not being located in any real space, it’s a rough guide rather than a daylight and shadowing simulation, but the future potential for Modelo is obviously there.
Sharing a model is as easy as sharing a file in any cloud-based hosting service: a few clicks and a shared link. When sharing a model you have options to restrict who can view it and who can see model comments.
Sharing also gives you the ability to embed the model as an iframe. You may not realise it, but iframes aren’t just for websites; with a plugin like iSpring or LiveWeb you can even embed live models directly into a PowerPoint presentation.
The example above is a small part of a project that I’ve been working on for around 12 months now. The project involves a building structure on a bridge deck constructed from spans of supertee structure. The bridge team working on the project were not working in Revit, so the supertee structure you’re seeing is actually a DWG file embedded within a Revit family, which has come across quite nicely. To get the colours to come through, you will need to have materials applied to your modelled elements, which in this instance I applied at a piping system level.
On top of all the collaboration features, Modelo also gives you the ability to create a virtual reality model from a Revit model. Check out the transformation in the video below, where Eli from Modelo demonstrates just how easy it is, going from Revit to VR in 120 seconds.
All this is great, but what about this new-fangled on-the-line technology? Won’t everything fall over when the data connection drops out? Well, Modelo has this figured out: once the 3D model is loaded into your browser, Modelo can still be used to present regardless of whether you have a data connection.
Finally, what does it cost? If you’re a personal user, it’s free. You’re limited to a single user, 5GB of storage and a maximum model upload size of 50MB, but at the free tier you can still share and collaborate with others as well as create VR models. For small businesses of up to 10 users, Modelo will set you back $25 per user per month, but you get bumped up to 1TB of storage and model uploads of up to 1GB per model. If you need more than 10 licences, you can contact Modelo for enterprise pricing.
I’ve only been using Modelo for a short while but I already love it. I actually prefer it to Autodesk’s web-based offering. The simplicity and execution really hit the mark.
In Australia, and likely elsewhere, a lot of people believe that BIM = Revit. This is certainly not the case; there are many other software packages out there, and if you need to collaborate with them, chances are you will need to use IFC files.
IFC stands for Industry Foundation Classes; it is a platform-neutral file format that is not controlled by any of the software vendors.
For those who haven’t worked with IFC files before, there are a few things to keep in mind before you jump headlong into a project where IFC is used for collaboration. I will specifically be talking about my experiences working with IFC outputs from ArchiCAD.
You need time
Depending on the size of the project, the process of importing an IFC file into Revit can take a very long time; IFC imports can be anywhere from almost instant to a few days. You need to make sure that you clearly communicate this not just with your engineering team but with your architecture design team as well.
It’s very important to keep an open line of communication with your architect. Pick up the phone… or more importantly, answer the phone! Don’t let problems sit unsolved in someone’s inbox. Five or ten minutes spent on the phone with the architect might save hours for both of you down the track. They will be able to split large projects into smaller chunks or limit which elements are exported from ArchiCAD so that the heavy lifting your hardware needs to perform is more manageable.
In all my testing, Revit appears to use at most two cores when importing IFC files. If you have a fast multi-core machine, you can set more than one file to import at a time. I highly recommend selecting each instance of Revit that you have open and setting the CPU affinity in Task Manager. This forces Windows to spread the load across your CPU. Maybe I’m imagining it, but I found that if I didn’t do this, all Revit processes seemed to share the same few cores; without it, my 6-core/12-thread Xeon CPU sat at around 12% utilisation, whereas if I forced each instance of Revit onto certain cores I could push CPU usage towards 80%. The problem you will face, though, is RAM; on some IFC files even 32GB is not enough.
For the Revit users on the team, push for the use of 2015 or newer; there really is no reason to dilly-dally in the comfort of older versions of the software. Revit 2015 brings more efficient IFC imports through the Link IFC option. It is not a true IFC link (Revit actually converts the file to an RVT on the fly), but in my testing, using Link IFC instead of the Open IFC method saves up to 60% on import times. The first iteration of architecture I received for my most recent project took just on 3.5 hours to import a 286MB IFC file using Open -> Open IFC, whereas using Link IFC on the same file took only 24 minutes.
With some helpful splitting of models by the architecture team you can improve your workflows significantly.
I have worked on two hospital projects authored in ArchiCAD, and on both we split the FF&E from the building fabric. The building fabric was always imported first and the FF&E followed afterward. We always requested up-to-date DWG exports of the floor plans that could be overlaid to reflect the FF&E as well.
You need DWG files
DWG files play a very important role when working with IFC files: they allow you to keep up to date quickly with furniture layouts, and they’ll also help speed up your model. ArchiCAD seems to handle high polygon counts far better than Revit can, which leads to glorious, highly detailed furniture models; trying to run a model with those 3D furniture models loaded is going to slow you and your team to a crawl. You don’t need to coordinate in 3D with a chair, but you may need to know where it is so you can place power and data outlets or other equipment. DWGs are the best way to do this.
Make sure you follow all the usual rules about DWG files: keep them clean; link them into their own host model, don’t insert; and wherever possible, never link or load them into your live model. There is an article in the June 2015 AUGI magazine which goes into detail on best practice. Something they suggest that I’ve never tried before is to insert the DWG files into an *.RFA file and then stack the *.RFA files in the host model. If it means less trouble, go for it.
IFC files do not carry ceiling tile information
Don’t worry though; you asked for DWG files, remember? Depending on what you want to show on your plans, you may need to link the DWG at certain heights so that it falls within the view range of your ceiling plan.
Again, follow best practice by linking the DWG files into a host model. Do not locate your RCP DWGs in your working model.
You need to understand coordinates
And you need to understand them well. When working with IFC exports from ArchiCAD, you will be working at the architect’s origin location, not in shared coordinates. As much as it has probably been drummed into you to “always use shared coordinates”, there is nothing actually wrong with using an origin-to-origin system.
In fact, when importing an IFC file you don’t get a choice of how to bring the file in; Revit will automatically import origin to origin. This poses a problem if you’re also collaborating with a civil team, but it’s pretty easy to overcome; you just need to make sure that it’s part of your workflow.
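For example, part of that workflow could be translating the civil team's shared-coordinate setout points into the architect's origin-based system with a fixed offset. A minimal sketch in plain Python; the offset values below are entirely made up, and on a real project you would agree them with the civil team and record them in the BIM execution plan:

```python
# Hypothetical offset from shared (survey) coordinates to the architect's
# origin, as (east, north, elevation). Agree the real values with the
# civil team before relying on them.
SHARED_TO_ORIGIN = (-334000.0, -6250000.0, -20.0)

def to_origin(point, offset=SHARED_TO_ORIGIN):
    """Shift a single (east, north, elev) point into origin-based coordinates."""
    return tuple(c + o for c, o in zip(point, offset))
```

Keeping the offset in one agreed, documented place means everyone transforms setout points the same way, in either direction.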
You can’t host families to an imported IFC
Unless you’re going to draw hundreds or potentially thousands of reference planes, forget about using those face-hosted families that you’ve become so accustomed to.
As you’re probably aware, all elements in Revit have a unique identifier, known as an element id or globally unique identifier (GUID). For whatever reason, Revit does not have a consistent way of applying these identifiers to imported IFC elements. This means that today the wall on ground level at grid intersection D5 might have an element id of 654321, and next week it could be 751155; as a result, your hosted families become orphaned. Worse yet, maybe a different wall on level 10 has picked up that original 654321 identifier, and your data outlets have been automatically moved to that new location by Revit.
At least this was the case in earlier versions of Revit. In newer versions, you are simply not allowed to host a family on an imported IFC face at all.
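If you want to see how badly the identifiers move around between imports, one approach is to export a map of IFC GUID to Revit element id after each import and diff the two. A minimal sketch of that comparison; the maps here are hypothetical and tiny, where a real export would cover every element in the model:

```python
def id_changes(old_map, new_map):
    """Given {ifc_guid: revit_element_id} maps from two successive imports,
    report the GUIDs whose Revit element id changed. Any family hosted on
    these elements would be orphaned or re-hosted incorrectly."""
    return {guid: (old_map[guid], new_map[guid])
            for guid in old_map
            if guid in new_map and old_map[guid] != new_map[guid]}
```

Even a handful of changed ids between imports is enough reason to avoid hosting anything on the IFC geometry.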
My advice would be to develop a suite of unhosted families. IFC is going to become more prevalent, especially as some governments around the world are mandating its use, so you’re going to have to deal with it more in the future; you might as well be prepared.
Never ask the architect “Can’t you just do it in Revit?”
Unless you want the architect to ask “Well, can’t you just do it in ArchiCAD MEP?” then don’t; and yes, in case you were wondering, there is an ArchiCAD MEP. Seriously, show a little respect. Sure, Revit has a larger market share than ArchiCAD, but that is not the point. BIM shouldn’t be, and is not, restricted to a single piece of software.
At the end of the day, it’s not actually that hard to work with IFC files. Sure, you have to think about things a little more, but that’s OK; how boring would life be if every day was exactly the same? If you’re on your way to higher levels of BIM, the most important thing is to make sure you get the rooms imported from the IFC file; that way you can still create MEP spaces and in turn perform all your MEP calculations quite successfully. Otherwise, if you’re still finding your feet in BIM and Revit is primarily a 3D documentation and coordination tool for you, working with IFC isn’t as hard as you might think.