
Tagging Invert Levels

Over the years, I’ve seen a lot of err.. solutions for tagging invert levels. From adding shared parameters to your pipe families through to using Dynamo and everything in between.

Ignoring the fact that you can’t add parameters to system families like duct and pipework, there is a far simpler way to get what you’re after.

Ever heard of the spot elevation tool?

The key is in the settings that you use.

The settings that I use in my templates are as follows; the settings to change are highlighted below.

In the units format dialogue, change your settings to match the following.

And of course, don’t forget to select the bottom elevation!

Importing DWGs to Revit While Still Displaying MTEXT Correctly

For those of you that still heavily rely on DWG based details and schematics brought into Revit from AutoCAD, you might have experienced issues maintaining the correct alignment of MTEXT once imported into Revit.

There is actually quite a simple solution, and no it’s not exploding the text.

In AutoCAD, MTEXT has grips that define the bounds of the MTEXT element.

The location of the grip indicates that the MTEXT bounding box doesn’t actually fully enclose the text contained within it. When the DWG is imported into Revit, the text is displayed differently and gives a completely unexpected result, placing the MTEXT word on the same line as TRUNCATED

If we modify the bounding box in AutoCAD so that it doesn’t truncate any of the text, like so

Once we reload the DWG file into Revit, the text will display correctly.

 

Practical Dynamo – Moving Views Based on Another View

Okay, so we’re on a roll with practical Dynamo usage. Last week we looked at placing views centrally on our sheets, but what if you didn’t want the view centrally placed? What if you wanted views placed in the same location on all sheets, maybe in the top left of the page?

As always with Dynamo, there is a solution for that. Again we’re going to be using the Rhythm custom node package to get the work done. This method requires one sheet to be used as a template that all the following sheets are based on.

In our example this time around, we want the view located in the top left (shown on the left), but by default Dynamo places our views in the bottom left (shown on the right)

 

This workflow can be easily integrated into our previous graph where we created new sheets in Dynamo using Excel; however, for this example we’re going to create a standalone graph. It’s assumed this time around that you already have all the required sheets and views created.

 

We start by taking all of our sheets, simply by using Categories and then All Elements of Category. After that we get into our Rhythm nodes.

First we get a list of all of the viewports on our sheets using the Sheet.GetViewportsAndViews node. Run the list through a List.Clean node to remove the empty list entries. This leaves us with just the viewport entries.

Meanwhile, we also need to get the viewport from our template sheet. In this instance our template sheet will be drawing H101 which we’ll select using the Sheets node and then we’ll feed that node into the Sheet.GetViewportsAndViews node which are both from the Rhythm package.

And finally we feed our data lists into the Viewport.SetLocationBasedOnOther node, which again is from Rhythm. It’s as simple as that.
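If you’re curious what’s going on behind that last node, a Dynamo Python node can do something similar with the Revit API’s Viewport.GetBoxCenter and Viewport.SetBoxCenter methods. This is only a rough sketch, not the Rhythm node’s actual code, and the two inputs (the template viewport and the list of viewports to move) are hypothetical.

import clr
clr.AddReference("RevitServices")
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager
clr.AddReference("RevitAPI")

doc = DocumentManager.Instance.CurrentDBDocument

# IN[0]: the viewport on the template sheet, IN[1]: the viewports to move
template_viewport = UnwrapElement(IN[0])
target_viewports = UnwrapElement(IN[1])

target_centre = template_viewport.GetBoxCenter()

TransactionManager.Instance.EnsureInTransaction(doc)
for vp in target_viewports:
    # SetBoxCenter moves each viewport so its box centre matches the template's
    vp.SetBoxCenter(target_centre)
TransactionManager.Instance.TransactionTaskDone()

OUT = target_viewports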

Hit the run button and watch the magic happen.

Practical Dynamo – Moving Views on Sheets

For those of you that read through my previous post last week on creating sheets using Dynamo, you might have come to the end of the post only to realise that the views haven’t been placed where you want them to be on the sheets.

For example, my sheet with the automatically placed view now looks like this

For the first method, I’m going to use nodes from both the Rhythm and Lunchbox packages, which you can download from your package manager. Simply install the latest version.

 

The Rhythm package has some super useful tools for a whole range of different actions in Revit, but today we’re going to focus on the nodes that can help us manipulate the location of our views on the sheet.

To get started, we use the Sheet.GetViewportsAndViews node. We feed the sheets from our previous steps into this node and it gives us the viewports, views and schedules as separate outputs. For this exercise, we’re only interested in the viewports. As always, while you’re reading through just click on the images to see them full size.

Next you need to use the Viewport.LocationData node from Rhythm. The outputs from this node are:

bBox, which returns the minimum (bottom left) and maximum (top right) points of the viewport bounding box
boxCenter, which returns the centre point of the viewport bounding box
boxOutline, which returns the start and end points of each side of the viewport bounding box

For this example, we’re going to use the boxCenter option because we’re going to get tricky with it a bit later on. For those who were wondering earlier what the Use Levels option on the nodes actually does, as you can see in my animation it changes the level of the list that we’re working with. Without the Use Levels option you would need to use either GetItemAtIndex or List.Deconstruct to get the data that you want to manipulate.

Next use the Points.DeconstructPoint node from the Lunchbox package; this will deconstruct your point into its individual X, Y & Z coordinates.

Now this is where we get too smart for our own good. I want my view to be placed in the middle of the available space on my titleblock. For my particular titleblock I know that the centre point is located at 378, 297 (yours may be different) and we already have the centre of the viewport from our Rhythm node.

To find how far we need to move the viewport, we need to subtract the view X centre value from the sheet X centre and the view Y centre value from the sheet Y centre. The code block is simply the values I’ve chosen; you could think of them much like a parameter in a family.
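In plain terms, the maths looks something like this. The sheet centre is the one from my titleblock; the viewport centre values are made-up examples.

# Centre of the usable titleblock area (from the code block values)
sheet_centre = (378.0, 297.0)
# Hypothetical boxCenter output from Viewport.LocationData
view_centre = (150.0, 420.0)

# Distance the viewport needs to move in X and Y
move_x = sheet_centre[0] - view_centre[0]   # 228.0
move_y = sheet_centre[1] - view_centre[1]   # -123.0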

The next step is to move the views. The vector gives the distance in X & Y coordinates that the view needs to be moved; the Vector.ByCoordinates and Element.MoveByVector nodes are both standard nodes within Dynamo.

And finally, the whole thing is tied together by pushing the viewport elements into the Element.MoveByVector node via a List.GetItemAtIndex, from which we’re taking the list elements at index 2.
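If you prefer to see the move written directly against the Revit API, a Dynamo Python node could do roughly the same job as Element.MoveByVector using ElementTransformUtils.MoveElement. This is only a sketch with hypothetical inputs, and it assumes the distances from the subtraction above are in millimetres.

import clr
clr.AddReference("RevitServices")
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager
clr.AddReference("RevitAPI")
from Autodesk.Revit.DB import ElementTransformUtils, XYZ, UnitUtils, DisplayUnitType

doc = DocumentManager.Instance.CurrentDBDocument

viewport = UnwrapElement(IN[0])   # the viewport to move
move_x_mm = IN[1]                 # X distance from the subtraction above
move_y_mm = IN[2]                 # Y distance from the subtraction above

# The Revit API works in internal units (decimal feet), so convert first
dx = UnitUtils.ConvertToInternalUnits(move_x_mm, DisplayUnitType.DUT_MILLIMETERS)
dy = UnitUtils.ConvertToInternalUnits(move_y_mm, DisplayUnitType.DUT_MILLIMETERS)

TransactionManager.Instance.EnsureInTransaction(doc)
ElementTransformUtils.MoveElement(doc, viewport.Id, XYZ(dx, dy, 0))
TransactionManager.Instance.TransactionTaskDone()

OUT = viewport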

Now sometimes when I run this script, I’ll see the following “Attempt to modify the model outside of transaction” error.

There is a simple solution to this. Just save your changes in Dynamo, close Dynamo and then re-open. Simply run the script again and everything will work!

 

An overview of our extension to the original graph from last week. I’ve highlighted the nodes from custom packages to make things a little easier as well; Captain BIMCAD actually called me out on last week’s example for not grouping my nodes!

Practical Dynamo – Generate Sheets from Excel

I was discussing Dynamo workflows with good old Captain BIMCAD the other night and we got to the topic of project setup.

Personally I don’t use Dynamo in my everyday project setup workflow; I use Ideate BIMLink, Omnia Scope Box Synchroniser and Sheet Duplicator. But if you don’t have access to this software, especially BIMLink as it’s a bit pricey, Dynamo is definitely a viable option. Here’s how to get it done.

First we need to create a list of sheets in Excel with Name and Number information. Starting with a blank workbook in Excel, create a list with sheet numbers in column A and sheet names in column B.

From here we need to generate new sheets with this Excel data. Don’t forget the File.FromPath node; you cannot feed the File Path node directly into the Excel.ReadFromFile node. Note that the name of the sheet in the Excel workbook is case sensitive. You can click on the image to view it in full size.


 

The next step is to remove the headers from our Excel file. They’re useful to us as they make the Excel file more readable, however they need to be removed when used in Dynamo.

To achieve this we’re going to use 2 nodes, List.FirstItem and List.RestOfItems.

 

Next we need to transpose our list so that we can feed our sheet details into the sheet creation node. You can see once we run the list through the List.Transpose node that we now have a list of sheet numbers and a list of sheet names, which sets us up for our next step.
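If it helps to picture what those list nodes are doing, here is the same idea in plain Python, with made-up sheet numbers and names standing in for the Excel data.

# Hypothetical rows as they come back from the Excel read: header row first
rows = [
    ["Sheet Number", "Sheet Name"],
    ["A101", "Ground Floor Plan"],
    ["A102", "Level 1 Plan"],
]

header = rows[0]    # what List.FirstItem returns
data = rows[1:]     # what List.RestOfItems returns

# Transposing swaps rows and columns, like List.Transpose
numbers, names = zip(*data)
# numbers -> ("A101", "A102"), names -> ("Ground Floor Plan", "Level 1 Plan")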

Most of the magic happens at the next node which is the Sheet.ByNameNumberTitleBlockAndView node.

For the node to work, we need to input the sheet name, sheet number and the titleblock family; you can see how we achieve this in the next screenshot.

While you’ve been reading, I’ve taken it upon myself to generate some views in our model and add them to our original Excel file.

We can copy what we’ve already created in Dynamo for the sheet names and numbers and we simply take index 2 from the list, giving us the view names. Note that these will be case sensitive.

The next step is to actually find those views in the model to drop onto the sheets. We do this by creating a list of all the views within the model. Take the Categories node and select Views from the drop down, feed this into the All Elements of Category node and then finally feed this into an Element.GetParameterValueByName node. For the parameter name, we want to get the value for the View Name parameter.
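For anyone curious what those three nodes boil down to, a Dynamo Python node can collect the same view names directly from the Revit API. This is just a rough equivalent sketch, not a replacement for the graph.

import clr
clr.AddReference("RevitServices")
from RevitServices.Persistence import DocumentManager
clr.AddReference("RevitAPI")
from Autodesk.Revit.DB import FilteredElementCollector, View

doc = DocumentManager.Instance.CurrentDBDocument

# Collect every view in the model and read its name, skipping view templates
views = FilteredElementCollector(doc).OfClass(View).ToElements()
view_names = [v.Name for v in views if not v.IsTemplate]

OUT = view_names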

From here we need to match the list of view names from Excel against the list of view names in the model. To do this, use an IndexOf node.

When you run this though, you’ll end up with a result of -1 instead of a list of indices. To fix this, change the level of list in the node. To do this, click on the right arrow on the element input of the node, select Use Levels and select @L1. Run the graph again and you’ll see the list of indices.
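Conceptually, with Use Levels set to @L1 the node is doing one lookup per Excel entry, something like this (the view names are made up for the example).

model_views = ["Ground Floor Plan", "Level 1 Plan", "Roof Plan"]
excel_views = ["Level 1 Plan", "Roof Plan"]

# One IndexOf-style lookup per item; anything missing comes back as -1
indices = [model_views.index(v) if v in model_views else -1 for v in excel_views]
# indices -> [1, 2]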

But what happens if you have a model where you don’t have the views setup yet? In our example we don’t have a view for the cover sheet or site plan yet which is why the view name is represented as null. You can see that the null view names give a -1 index result. If we feed this data into the Sheet.ByNameNumberTitleBlockAndView node as it is, it won’t create the sheets with the null views.

You can still use the same node, but there is a trick to it.

First, grab the Manage.ReplaceNulls node. Feed the list for views into the data section.

Next, create an empty drafting view; I’m just going to leave mine as the default Drafting 1. Feed the ReplaceWith input of the Manage.ReplaceNulls node with the string Drafting 1.
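The effect of Manage.ReplaceNulls is straightforward enough to sketch in plain Python, again with made-up view names.

excel_views = ["Ground Floor Plan", "Level 1 Plan", None, None]

# Swap every null for the placeholder drafting view name
cleaned = [v if v is not None else "Drafting 1" for v in excel_views]
# cleaned -> ["Ground Floor Plan", "Level 1 Plan", "Drafting 1", "Drafting 1"]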

Now when we search our views in the model, we’ll have the correct indices returned.

But hold on there a minute! We can’t drop drafting views on multiple sheets, how is this even going to work? To be honest, I’m not quite sure why but if you feed an empty drafting view into the Sheet.ByNameNumberTitleBlockAndView node it will generate an empty sheet. Whatever the reason, that’s a win for us!

Simply feed Manage.ReplaceNulls into the Sheet.ByNameNumberTitleBlockAndView node and we’re done!

 

If you’ve had automatic run selected, you’ll have a nice set of shiny new sheets created, otherwise simply click run and watch the magic happen.

Blink and you’ll miss it!

The end result. Click the image for the full resolution version.

Site Extraction with flux.io and Dynamo

By now, most people in the industry would have heard of flux.io, a spin-off from X (formerly Google X). Recently, flux.io updated their site extraction tool, which pulls data from free open source datasets from Open Street Map and NASA. When combined with Dynamo, it couldn’t be any simpler to pull topography information into your Revit model.

So how do we get started with this new-fangled technology?

Firstly, you’ll need a flux.io account. Once you have that sorted, head on over to https://extractor.flux.io/. Once there you’ll be greeted with a Google map where you can search for your location. The map system works exactly as you expect it to. Simply drag and resize the selection box around the area you’re interested in and then select what you want from the menu in the top right of your screen.

When your data is ready, you can open it in flux and review the results. You simply drag and drop your keys from the column on the left into the space on the right. You can pan, zoom and rotate your way around the 3D preview, although as someone that works in Revit and Navisworks all day long I found that the controls aren’t the easiest.

Struggling with the navigation?
right mouse button = pan
left mouse button = orbit
scroll button = zoom

So all of this is great, but how do you get this into Revit? It’s actually incredibly simple.

You will need to have both Dynamo and the flux.io plugin suite installed, but once you do you’re only a few minutes away from generating a Revit topography.

To get started you will need to log in to flux.io through Revit and Dynamo. If it’s your first time using flux.io you might have to approve the connection between Revit/Dynamo and flux, similar to what you would do when sharing account information between online services and Google or Facebook.

Find the Flux package within Dynamo and first drop in the Flux Project node.

Once you have your flux project selected, it’s just three more nodes. Drop in the Receive from Flux node and select topographic mesh from the drop down. From there, push the flux topography into Mesh.VertexPositions and then finally into Topography.ByPoints.
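For anyone who prefers to see the last step written out, a Dynamo Python node can build the toposurface from a list of points through the Revit API. This is only a rough sketch, not what the Topography.ByPoints node actually does internally, and it assumes the incoming vertices have already been flattened to [x, y, z] values in Revit’s internal units (decimal feet).

import clr
clr.AddReference("RevitServices")
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager
clr.AddReference("RevitAPI")
from Autodesk.Revit.DB import XYZ
from Autodesk.Revit.DB.Architecture import TopographySurface

doc = DocumentManager.Instance.CurrentDBDocument

# IN[0]: a list of [x, y, z] vertices, e.g. flattened from Mesh.VertexPositions
points = [XYZ(p[0], p[1], p[2]) for p in IN[0]]

TransactionManager.Instance.EnsureInTransaction(doc)
topo = TopographySurface.Create(doc, points)
TransactionManager.Instance.TransactionTaskDone()

OUT = topo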

Comparing the flux topography in red against the professional survey in blue, we can see that the flux topography is no replacement for a real survey; we are looking at a 5-8m difference between the survey and the flux data. Thankfully, surveyors aren’t going to be out of a job any time soon. This is only the case on the example site in Sydney though; other sites are far more accurate depending on where the source data comes from. Remember the flux data comes from a combination of sources, including satellite survey, which leads to varying levels of accuracy. You shouldn’t rely on open source data like this as your sole source of information; you should be referring to relevant site survey information to verify the data against.

The inaccuracy of the data doesn’t mean that the flux data is useless, though. Provided that you’re able to reference the flux data against known survey data and adjust to suit, this provides an excellent opportunity to fill in missing information surrounding your known survey and site. You then have the opportunity to use the data for visualisation in concept stages or flyover presentations of large sites or precincts.

 

Using Dynamo to Generate Pipework Hangers

If you’re finding yourself modelling more detailed models that reflect proposed fabrication or constructed works, Cesare Caoduro over at the BIM and Others blog has a great step by step tutorial on how to use Dynamo to generate Unistrut style pipework supports.

If you’re mechanical or electrical it would be quite easy to adapt the first portion of the script to generate the same type of supports for ductwork and cable trays.

You can check out Cesare’s post here

Consistency is Key. Setting Project Info With Dynamo

Over the last 3 months I’ve been busy working hard on coordinating the BIM for an existing infrastructure study of a hospital. The site consists of everything from heritage-listed sandstone buildings constructed in the 1800s, where for obvious reasons there are no existing drawings, to a building that’s currently in the final stages of construction and has been fully designed and coordinated in BIM. The infrastructure study involved locating assets and services that interconnect between buildings, within relatively accurate space in the BIM, at LOD 200 as per the BIMForum guidelines.

When it came to the BIM, we decided to work with one building per MEP model, which meant we had 28 MEP building models, 28 architecture building models that were created using a series of stacked DWG files, and 4 site models. The obvious problem with so many models was going to be the consistency of the data and how we would go about verifying that data. Ensuring that all 60 models carried the same consistent information was a mountainous task that would have taken an exorbitant number of hours to complete if reviewed manually, even if utilising BIMLink.

Enter stage left: Dynamo.

We used Dynamo far more extensively on this project than any that I have worked on before. Normally I’d work with little snippets to process small amounts of data and automate minor repetitive tasks, but this project was a real BIM project; there were no traditional drawing deliverables, which actually seemed to genuinely baffle newcomers to the project. The deliverable was the federated model and, more importantly, the information contained within all the individually modelled elements. A few hours on one of my Sundays and I ended up with what you see below.


That structured mess was able to verify photo file names and associated photo URLs; it verified asset codes were correct and, if they weren’t, it generated new asset codes in the required format; it also checked and corrected all the information required to generate those new asset codes; and finally, probably the simplest part of it all, it filled in the project information parameters for us. It was run on all MEP models, with another run on all the architecture models that we created.

Although we were able to automate a lot of really mundane processes, they were for the most part fairly project specific so even though the Dynamo script itself was invaluable to the project, other than the experience provided it doesn’t hold that much value for future projects. There was however one custom node that I put together for the population of Project Information parameters that will probably get used again and again on projects in the future.


Each input of the node is filled with a string for each individual parameter. In the project, the building name/number parameter relied on the levels within the model being named correctly, so another portion of the script checked that the naming conventions for levels were followed.
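That level-name check isn’t shown here, but the idea is simple enough to sketch. The naming convention and regular expression below are purely hypothetical; the point is just collecting the levels and flagging anything that doesn’t match the agreed pattern.

import clr
import re
clr.AddReference("RevitServices")
from RevitServices.Persistence import DocumentManager
clr.AddReference("RevitAPI")
from Autodesk.Revit.DB import FilteredElementCollector, Level

doc = DocumentManager.Instance.CurrentDBDocument

# Hypothetical convention: building number then level, e.g. "B07-Level 03"
pattern = re.compile(r"^B\d{2}-Level \d{2}$")

levels = FilteredElementCollector(doc).OfClass(Level).ToElements()
bad_names = [lvl.Name for lvl in levels if not pattern.match(lvl.Name)]

# Anything left in bad_names breaks the convention and would also break
# the building name/number lookup in the project information node
OUT = bad_names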

The processing of the data itself is performed by Python code inside the custom node, after which the output shows the data that has been filled in. You can either pick the custom node up from the MisterMEP Dynamo package, or if you want to recreate this yourself the Python code is below

import clr
clr.AddReference("RevitServices")
import RevitServices
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager

doc = DocumentManager.Instance.CurrentDBDocument
projinfo = doc.ProjectInformation
#The inputs to this node will be stored as a list in the IN variables.
OrgName = IN[0]
OrgDesc = IN[1]
BuildNumber = IN[2]
ProjAuthor = IN[3]
ProjDate = IN[4]
ProjStat = IN[5]
ProjClient = IN[6]
ProjAddress = IN[7]
ProjName = IN[8]
ProjNumber = IN[9]

TransactionManager.Instance.EnsureInTransaction(doc)


projinfo.OrganizationName = OrgName
projinfo.OrganizationDescription = OrgDesc
projinfo.BuildingName = BuildNumber
projinfo.Author = ProjAuthor
projinfo.IssueDate = ProjDate
projinfo.Status = ProjStat
projinfo.ClientName = ProjClient
projinfo.Address = ProjAddress
projinfo.Name = ProjName
projinfo.Number = ProjNumber

TransactionManager.Instance.TransactionTaskDone()

elementlist = list()
elementlist.append("DONE!")
elementlist.append(projinfo.OrganizationName)
elementlist.append(projinfo.OrganizationDescription)
elementlist.append(projinfo.BuildingName)
# Append Author if it can be read, otherwise append an empty list
try:
    elementlist.append(projinfo.Author)
except:
    elementlist.append(list())
elementlist.append(projinfo.IssueDate)
elementlist.append(projinfo.Status)
elementlist.append(projinfo.ClientName)
elementlist.append(projinfo.Address)
elementlist.append(projinfo.Name)
elementlist.append(projinfo.Number)
OUT = elementlist
#OUT = "done" 

Modelo Brings Presentations And Collaboration to Anywhere With a Data Connection

In the last 6 to 18 months, the 3D collaboration and visualisation world has exploded with new software solutions to make life easier. The latest contender is from a startup based in Cambridge called Modelo. Modelo is a cloud based service that allows you to view 3D models that have been optimised for your web browser, giving you the ability to view models on almost any device with a data connection. Being a cloud based service, the recipient of your model doesn’t even need to own viewing software, as the model comes to you through a series of tubes and is viewed entirely on the line.

You can upload any Revit, SketchUp or Rhino file to Modelo and the original file is converted to an optimised format for viewing. The original file is kept on the Modelo servers, however there is the option to delete the original file after the optimised file has been created.

Modelo is impressively fast for a browser based model viewing platform. You can share models with clients and the design team no matter where they’re located, allowing the team to annotate models and discuss through an online chat system.

It’s not collaboration in the league of Revizto; it’s collaboration made simple.


The commenting functionality is extremely well thought out, with the ability to cut 3D sectional views or attach 2D images such as photos or plan views. Comments can be kept private or flagged as ‘client ready’, so when you share your model only the client ready comments are displayed.


Camera locations are remembered in the comments as well, meaning that when a comment is selected, the model seamlessly flies around to the view the comment was created in so you see exactly what the person making the comment sees.

You can even adjust basic settings within the model, such as turning layers on and off (it uses Revit worksets) and even adjusting the location of the sun to change shadow detail in real time. Of course, with just simple sliders and the model not being located in any real space, it’s a rough guide rather than a true daylight and shadowing simulation, but the future potential is obviously there for Modelo.


Sharing a model is as easy as sharing a file in any cloud based hosting service; it takes just a few clicks to share a link. When sharing a model you have options to restrict who can view the model and who can see model comments.


Sharing the model also gives you the ability to embed the model as an iframe. You may not realise it, but iframes are not just something that can be embedded within websites; with a plugin like iSpring or LiveWeb you can even embed the live models directly into a PowerPoint presentation.

The example above is a small part of a project that I’ve been working on for around 12 months now. The project involves a building structure on a bridge deck constructed from spans of supertee structure. The bridge team working on the project were not working in Revit, so the supertee structure that you’re seeing is actually a DWG file embedded within a Revit family, which has come across quite nicely. To get the colours to come through, you will need to have materials applied to your modelled elements, which in this instance I have applied at a piping system level.

On top of all the collaboration features, Modelo also gives you the ability to create a virtual reality model from a Revit model. Check out the transformation from Revit to VR in the video below, Eli from Modelo demonstrates just how easy it is, going from Revit to VR in 120 seconds.

All this is great, but what about this new fangled on the line technology? Won’t everything fall over when the data connection drops out? Well, Modelo have this figured out; once the 3D model is loaded into your browser, Modelo can still be used to present regardless of whether you have a data connection or not.

Finally, what does it cost? Well if you’re a personal user, it’s free. You’re limited to a single user, 5GB of storage and a maximum model upload size of 50MB. At the free tier you can still share and collaborate with others as well as create VR models. For small businesses of up to 10 users, Modelo will set you back $25 per user per month, but you also get bumped up to 1TB of storage and model uploads of up to 1GB per model. If you need more than 10 licences you can contact Modelo for enterprise pricing as well.

I’ve only been using Modelo for a short while but I already love it. I actually prefer it to Autodesk’s web based offering. The simplicity and execution really hits the mark.

Need Bulk Family Upgrades? You’ve Got Options!

So, I’ve been asked maybe 4.. 5 times in the last week or so “How do I batch update Revit family files?”, so for those that don’t know how, here is a run down of some of the options available to you.

Bulk File Upgrader – US$99

Harry Mattison of Boost Your BIM has written a Bulk File Upgrader program which you can use to upgrade projects, families and templates in bulk. It costs US$99 and has a very simple to use graphical interface which basically consists of selecting the location of files in and files out.

It’s a pretty simple app; it’s built to do one thing and does it well. You can pick it up from the Autodesk App Exchange here

BIMWerx Bulk Upgrader – US$10

This is a relatively new file upgrade addin that was released just this last week.

The batch file upgrader will scan a user specified folder for project files and family files, upgrade them and save them in the same folder. All backup files are automatically removed afterwards in order not to clutter the folder that is being upgraded.

This application requires minimal input, and works great with multiple library file upgrades. You can grab it from the Autodesk App Store.

CADDaddy Tools – US$12.99

Another alternative is to think about what you want to do in reverse.

James LeVieux sells a great little addin called CADDaddy Tools. It doesn’t upgrade files, but it will export families to a folder location complete with sub-folders that reflect the element category. This is a fantastic solution if you keep all (or most) of your families in your base Revit template. Simply upgrade your Revit template or project and then use the CADDaddy Tools family exporter to export all your newly upgraded families.

You can pick and choose what categories are exported and away you go.

The best part is that CADDaddy Tools is only US$12.99 for the latest version (2017) and gets cheaper for previous releases, and for that price you also get a few other handy tools as well. Once you’ve exported your families, if you have the Revit version as a suffix to your file names, there are a number of free file renaming utilities out there that you can use to rename your files to reflect the correct version.

Journal File – Free!

If you’re up for something ever so slightly more complicated than the other options, you can use a Revit journal script. I’ve tested this to work in both Revit 2015 and 2016 and it will bulk upgrade files with ease, and what’s better is you can do it for free!

First download this file and extract the contents to the root directory of your family library. I suggest you make a copy of your family library to apply the changes to, so you don’t lose the previous version of your families.


Run the BatchUpgrade2016.bat file. This will start by cleaning out all of your old family backup files (i.e. family.0001.rfa) and then it will create a list named famlist_rfa.txt
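If you’re curious what the batch file is doing before the journal takes over, here is a rough Python equivalent of those two housekeeping steps. The library path is obviously hypothetical, and this is a sketch of the idea rather than the batch file itself.

import os
import re

library_root = r"C:\Revit\FamilyLibrary"    # hypothetical path, point it at your copy
backup_pattern = re.compile(r"\.\d{4}\.rfa$", re.IGNORECASE)

family_paths = []
for folder, _, files in os.walk(library_root):
    for name in files:
        full_path = os.path.join(folder, name)
        if backup_pattern.search(name):
            os.remove(full_path)            # clean out backups like family.0001.rfa
        elif name.lower().endswith(".rfa"):
            family_paths.append(full_path)  # real families go into the list

# The journal script reads this list to know which families to open and upgrade
with open(os.path.join(library_root, "famlist_rfa.txt"), "w") as f:
    f.write("\n".join(family_paths))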

Once you have the famlist_rfa.txt file, simply drag and drop the BatchUpgrade2016.txt file onto the Revit icon on the desktop. Make sure you drag it to the correct icon i.e. Revit 2016.

Now all that’s left to do is watch the magic!


13/10/16 – Script updated for Revit 2017 compatibility. Please re-download the file for the updated script.