Ryan Lenihan

Practical Dynamo – Moving Views on Sheets

For those of you who read through my previous post last week on creating sheets using Dynamo, you might have come to the end only to realise that the views haven't been placed where you want them on the sheets.

For example, my sheet with the automatically placed view now looks like this:

For the first method I'm going to use nodes from both the Rhythm and Lunchbox packages, which you can download from your package manager. Simply install the latest version.

 

The Rhythm package has some super useful tools for a whole range of different actions in Revit, but today we’re going to focus on the nodes that can help us manipulate the location of our views on the sheet.

To get started, we use the Sheet.GetViewportsAndViews node. Feed the sheets from our previous steps into this node and it will give you the viewports, views and schedules as separate outputs. For this exercise, we're only interested in the viewports. As always, while you're reading through, just click on the images to see them full size.

Next you need to use the Viewport.LocationData node from Rhythm. The outputs from this node are:

bBox, which returns the minimum (bottom left) and maximum (top right) points of the viewport bounding box.
boxCenter, which returns the centre point of the viewport bounding box.
boxOutline, which returns the start and end points of each side of the viewport bounding box.

For this example, we're going to use the boxCenter option because we're going to get tricky with it a bit later on. For those wondering earlier what the Use Levels option on the nodes actually does, as you can see in my animation it changes the level of the list that we're working with. Without the Use Levels option you would need to use either List.GetItemAtIndex or List.Deconstruct to get at the data that you want to manipulate.
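If list levels are new to you, here's a rough plain-Python equivalent of the two approaches. The nested list is made up purely for illustration; it just stands in for one sub-list of viewport data per sheet.

# Hypothetical nested list: one sub-list of viewports per sheet.
data = [["vp_A101a", "vp_A101b"], ["vp_A102a"], ["vp_A103a"]]

# Without Use Levels: dig out the sub-list you want by index first,
# i.e. the List.GetItemAtIndex approach.
first_sheet = data[0]                        # ['vp_A101a', 'vp_A101b']

# With Use Levels (@L2): the node effectively maps the operation over
# every item at that depth, without you restructuring the list.
renamed = [[vp.upper() for vp in sheet] for sheet in data]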

Next, use the Points.DeconstructPoint node from the Lunchbox package; this will deconstruct your point into its individual X, Y & Z coordinates.

Now this is where we get too smart for our own good. I want my view to be placed in the middle of the available space on my titleblock. For my particular titleblock I know that the centre point is located at 378, 297 (yours may be different) and we already have the centre of the viewport from our Rhythm node.

To find how far we need to move the viewport, we subtract the view's X centre value from the sheet's X centre, and the view's Y centre value from the sheet's Y centre. For example, if the sheet centre is (378, 297) and the view centre sits at (300, 250), the view needs to move by (78, 47). The code block is simply values I've chosen; you could think of them much like a parameter in a family.

The next step is to move the views. The vector gives the distance in X & Y that the view needs to be moved; the Vector.ByCoordinates and Element.MoveByVector nodes are both standard nodes within Dynamo.

And finally, the whole thing is tied together by pushing the viewport elements into the Element.MoveByVector node via a List.GetItemAtIndex node, from which we're taking the list elements at index 2.
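If you'd prefer to condense that last stretch of the graph into a single Python node, a minimal sketch might look like the one below. To be clear, this is my own assumption-laden sketch rather than what the Rhythm nodes do internally: it assumes IN[0] is the flat list of viewports, and the target centre is expressed in Revit's internal units (feet), a conversion the Dynamo nodes otherwise handle for you.

import clr
clr.AddReference("RevitServices")
clr.AddReference("RevitAPI")
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager
from Autodesk.Revit.DB import XYZ, ElementTransformUtils

doc = DocumentManager.Instance.CurrentDBDocument
viewports = UnwrapElement(IN[0])   # the viewports from Sheet.GetViewportsAndViews
target = XYZ(1.24, 0.97, 0)        # hypothetical sheet centre: ~378mm, 297mm in feet

TransactionManager.Instance.EnsureInTransaction(doc)
for vp in viewports:
    centre = vp.GetBoxCenter()                              # Rhythm's boxCenter equivalent
    ElementTransformUtils.MoveElement(doc, vp.Id, target - centre)
TransactionManager.Instance.TransactionTaskDone()

OUT = viewports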

Now sometimes when I run this script, I'll see the following "Attempt to modify the model outside of transaction" error.

There is a simple solution to this: save your changes in Dynamo, close Dynamo and then re-open it. Run the script again and everything will work!

 

An overview of our extension to the original graph from last week. I've highlighted the nodes from custom packages to make things a little easier as well; Captain BIMCAD actually called me out on last week's example for not grouping my nodes!

Practical Dynamo – Generate Sheets from Excel

I was discussing Dynamo workflows with good old Captain BIMCAD the other night and we got to the topic of project setup.

Personally I don’t use Dynamo in my everyday project setup workflow, I use Ideate BIMLink, Omnia Scope Box Synchroniser and Sheet Duplicator but if you don’t have access to this software; especially BIMLink as it’s a bit pricey, Dynamo is definitely a viable option. Here’s how to get it done.

First we need to create a list of sheets in Excel with Name and Number information. Starting with a blank workbook in Excel, create a list with sheet numbers in column A and sheet names in column B.

From here we need to generate new sheets with this Excel data. Don't forget the File.FromPath node: you cannot feed the File Path node directly into the Excel.ReadFromFile node. Note that the name of the sheet in the Excel workbook is case sensitive. You can click on the image to view it full size.


 

The next step is to remove the headers from our Excel data. They're useful to us as they make the Excel file more readable, however they need to be removed before the data is used in Dynamo.

To achieve this we're going to use two nodes: List.FirstItem and List.RestOfItems.

 

Next we need to transpose our list so that we can feed our sheet details into the sheet creation node. You can see once we run the list through the List.Transpose node that we now have a list of sheet numbers and a list of sheet names, which sets us up for our next step.
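In plain Python terms, and with made-up sheet data rather than the real Excel output, those two steps boil down to something like this:

# Raw rows as they come back from the Excel read, header included.
rows = [["Number", "Name"], ["A101", "Ground Floor Plan"], ["A102", "Level 1 Plan"]]

body = rows[1:]              # List.RestOfItems: drop the header row
numbers, names = zip(*body)  # List.Transpose: one list per column

print(list(numbers))  # ['A101', 'A102']
print(list(names))    # ['Ground Floor Plan', 'Level 1 Plan']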

Most of the magic happens at the next node which is the Sheet.ByNameNumberTitleBlockAndView node.

For the node to work, we need to input the sheet name, the sheet number, the titleblock family type and the view; you can see how we achieve this in the next screenshot.

While you’ve been reading, I’ve taken it upon myself to generate some views in our model and add them to our original Excel file.

We can copy what we've already created in Dynamo for the sheet names and numbers, and we simply take index 2 from the list, giving us the view names. Note that these are case sensitive.

The next step is to actually find those views in the model to drop onto the sheets. We do this by creating a list of all the views within the model. Take the Categories node and select Views from the drop down, feed this into the All Elements of Category node and then finally feed this into an Element.GetParameterValueByName node. For the parameter name, we want to get the value for the View Name parameter.

From here we need to match the list of view names from Excel against the list of view names in the model. To do this, use an IndexOf node.

When you run this though, you'll end up with a result of -1 instead of a list of indices. To fix this, change the list level on the node: click on the arrow on the element input of the node, select Use Levels and select @L1. Run the graph again and you'll see the list of indices.
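Under the hood, IndexOf at @L1 is doing a lookup much like this sketch (the view names here are made up):

model_views = ["Ground Floor Plan", "Level 1 Plan", "Drafting 1"]
excel_views = ["Ground Floor Plan", None, "Level 1 Plan"]

# IndexOf returns the position of each name, or -1 when there's no match.
indices = [model_views.index(v) if v in model_views else -1 for v in excel_views]
print(indices)  # [0, -1, 1]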

But what happens if you have a model where the views aren't set up yet? In our example we don't have a view for the cover sheet or site plan, which is why the view name is represented as null. You can see that the null view names give a -1 index result. If we feed this data into the Sheet.ByNameNumberTitleBlockAndView node as it is, it won't create the sheets with the null views.

You can still use the same node, but there is a trick to it.

First, grab the Manage.ReplaceNulls node and feed the list of views into the data input.

Next, create an empty drafting view; I'm just going to leave mine as the default Drafting 1. Feed the ReplaceWith input of the Manage.ReplaceNulls node with the string Drafting 1.

Now when we search our views in the model, we’ll have the correct indices returned.

But hold on there a minute! We can't drop drafting views on multiple sheets, so how is this even going to work? To be honest, I'm not quite sure why, but if you feed an empty drafting view into the Sheet.ByNameNumberTitleBlockAndView node it will generate an empty sheet. Whatever the reason, that's a win for us!

Simply feed Manage.ReplaceNulls into the Sheet.ByNameNumberTitleBlockAndView node and we’re done!
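Sketched in plain Python, the Manage.ReplaceNulls step is simply swapping each null for the placeholder name before the lookup (again, the names are made up):

excel_views = ["Ground Floor Plan", None, "Level 1 Plan"]

# Manage.ReplaceNulls: every null becomes the empty drafting view's name.
replaced = [v if v is not None else "Drafting 1" for v in excel_views]
print(replaced)  # ['Ground Floor Plan', 'Drafting 1', 'Level 1 Plan']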

 

If you’ve had automatic run selected, you’ll have a nice set of shiny new sheets created, otherwise simply click run and watch the magic happen.

Blink and you’ll miss it!

The end result. Click the image for the full resolution version.

Site Extraction with flux.io and Dynamo

By now, most people in the industry will have heard of flux.io, a spin-off from X (formerly Google X). Recently, flux.io updated their site extraction tool, which pulls data from free open datasets from OpenStreetMap and NASA. When combined with Dynamo, it couldn't be any simpler to pull topography information into your Revit model.

So how do we get started with this new-fangled technology?

Firstly, you’ll need a flux.io account. Once you have that sorted head on over to https://extractor.flux.io/ Once there you’ll be greeted with a Google map where you can search for your location. The map system works exactly as you expect it to. Simply drag and resize the selection box around the area you’re interested in and then select what you want from the menu on the top right of your screen.

When your data is ready, you can open it in flux and review the results. You simply drag and drop your keys from the column on the left into the space on the right. You can pan, zoom and rotate your way around the 3D preview, although as someone that works in Revit and Navisworks all day long, I found the controls aren't the easiest.

Struggling with the navigation? Right mouse button = pan, left mouse button = orbit, scroll button = zoom.

So all of this is great, but how do you get this into Revit? It’s actually incredibly simple.

You will need both Dynamo and the flux.io plugin suite installed, but once you have those in place you're only a few minutes away from generating a Revit topography.

To get started you will need to log in to flux.io through both Revit and Dynamo. If it's your first time using flux.io, you might have to approve the connection between Revit/Dynamo and flux, much like authorising an online service to share account information with Google or Facebook.

Find the Flux package within Dynamo and first drop in the Flux Project node.

Once you have your flux project selected, it's just three more nodes. Drop in the Receive from Flux node and select the topographic mesh from the drop down. From there, push the flux topography into Mesh.VertexPositions and then finally into Topography.ByPoints.
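For the curious, the last two nodes can be approximated with a few lines of Python in Dynamo. This is just a sketch under a couple of assumptions: IN[0] is the list of Dynamo points coming out of Mesh.VertexPositions, and TopographySurface.Create is the Revit API call that, as far as I can tell, Topography.ByPoints wraps.

import clr
clr.AddReference("RevitServices")
clr.AddReference("RevitAPI")
clr.AddReference("RevitNodes")
import Revit
clr.ImportExtensions(Revit.GeometryConversion)
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager
from Autodesk.Revit.DB.Architecture import TopographySurface

doc = DocumentManager.Instance.CurrentDBDocument
pts = [p.ToXyz() for p in IN[0]]   # Dynamo points to Revit XYZ (internal units)

TransactionManager.Instance.EnsureInTransaction(doc)
topo = TopographySurface.Create(doc, pts)   # build the toposurface from the points
TransactionManager.Instance.TransactionTaskDone()

OUT = topo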

Comparing the flux topography in red against the professional survey in blue, we can see that the flux topography is no replacement for a real survey; we are looking at a 5-8m difference between the survey and the flux data. Thankfully, surveyors aren't going to be out of a job any time soon. This is only the case on the example site in Sydney though; other sites are far more accurate depending on where the source data comes from. Remember the flux data is coming from a combination of sources, including satellite survey, which leads to varying levels of accuracy. You shouldn't rely on open data like this as your sole source of information; you should be referring to the relevant site survey information to verify the data against.

The inaccuracy of the data doesn't mean that the flux data is useless though. Provided you're able to reference the flux data against known survey data and adjust to suit, it provides an excellent opportunity to fill in missing information surrounding your known survey and site. You then have the opportunity to use the data for visualisation in concept stages, or for flyover presentations of large sites or precincts.

 

What to do when you have no option to save your Navisworks NWD

A quick tip for your Tuesday afternoon.

Have you ever had the problem where no matter how many times you click on the save button, you can’t see the option to save your Navisworks file as an NWD?

 

This likely means that you have an NWD file attached that was published with the 'May be re-saved' option unchecked. Have a check through your selection tree, find and remove the offending NWD file, then try to save again. All of a sudden, saving as an NWD is an option again.

In this case, the fix is to get another copy of the file, this time published with the 'May be re-saved' option checked. Better yet, if you will be working with future iterations of the file as your project progresses, ask for an NWC.

So you’re having trouble with masked text being transparent..

You have some grids. You have some text. The text is opaque, but for some reason it still appears to be transparent. You've probably pulled almost all of your hair out trying to figure out the problem, looking at all manner of view settings even if they're not related to text or grids, but at the end of the day it's really quite simple.

 

It is literally your draw order.

Even though your text type has an opaque background, if you create the text first and then draw another object over the top, the object drawn last will appear on top, making it seem like the text is transparent. You can confirm this by creating new text and placing it over the same object, which will now hide the object.

Rather than re-create every single text note, you can fix this problem quickly by selecting all the affected text, cutting it and then pasting it aligned to the view.

So lessons learnt?

Well, in this instance the root cause of the problem was that rather than shifting the grids, the architect deleted them and created new ones. The copy monitored grids in the MEP model were then deleted and re-copy monitored, and all of a sudden some of the grids appeared through the text, as they were created after the text was.

 

Generating 3D Topography from a 2D DWG file

Being an MEP guy, topography is one of those things in Revit I don't deal with too often, but the general concept of working with topography isn't tremendously difficult.

The two main options you have for automagically generating a topography surface are to import from a DWG file that contains 3D surfaces (generally topography triangles), or to generate from a list of points in a CSV file.

2016-12-05_11-33-45

The problem is, sometimes you won't get a DWG file with 3D surfaces and you won't have a CSV file either… in fact, the surveyor sent you a DWG file with surface points at 0 elevation. Useless, right? Well, maybe not quite. Before you jump up and down telling the surveyor to "do your job properly" and demanding the file in 3D, you might be able to impress everyone with your skills and get quicker results.

Step 1 – Extracting the data from AutoCAD

Firstly you want to extract the usable information from AutoCAD, and to do that you need a really clear understanding of the data you've been given. Survey drawings can contain spot information for elements such as buildings, trees, fences and other non-surface features, so you need to make sure that you're grabbing only the information that is relevant to the topography surface. The last thing you want to be doing is telling everyone that the data is rubbish because "these random spots I decided to look at are 5m higher than the surrounding surface, the surveyor must be wrong".

Using your preferred workflow, strip out all the information in the DWG file not related to the surface. My method is to freeze off unrelated layers and then copy and paste what remains into a new DWG file.

2016-12-05_12-00-23

This will more than likely leave behind linework that isn’t required to generate your topography. You can remove anything from the DWG that isn’t a point or text if you like, but leaving the information in the DWG won’t affect the process. If there is any MTEXT in the file, select it and explode it so it becomes regular text (DTEXT).

If you’re removing the redundant linework and only leaving points and text, once finished the before and after should look something like this

2016-12-05_12-02-34

The next step is to use a lisp routine to generate 3D points from the text that identifies the elevations throughout the drawing. You might already have a lisp routine that does the job; if not, a quick Google search turns up this page from CADTutor, which has the source for a lisp routine by the user Geobuilder. This is why we had to explode our MTEXT: the lisp routine only works on single line DTEXT.

Text to Points LISP:

(defun C:Convert_Text_to_Point (/ ss Z_value temp koord)
  (if (setq ss (ssget "_:L" '((0 . "Text"))))
    (progn
      (initget "Koord Value")
      (setq
	Z_value	(getkword "\nTake Z from [Koord/Value]? <Value>:")
	Z_value	(if Z_value
		  Z_value
		  "Value"
		)
	ss	(vl-remove-if-not
		  '(lambda (x) (= (type x) 'ENAME))
		  (mapcar 'cadr (ssnamex ss))
		)
      )
      (foreach item ss
	(setq temp  (entget item)
	      koord (cdr (assoc 10 temp))
	      koord (if	(eq Z_value "Value")
		      (list (car koord)
			    (cadr koord)
			    (atof (cdr (assoc 1 temp)))
		      )
		      koord
		    )
	)
	(entdel item)
	(entmakex
	  (list
	    '(0 . "POINT")
	    (cons 10 koord)
	  )
	)
      )
    )
  )
)


Copy and paste the lisp code into Notepad and save it as something you’ll remember. I’ve saved mine as txt2point.lsp

2016-12-05_12-13-53

In case you were unaware, the command for a lisp routine is defined after (defun C: in the code, so in the case of Geobuilder's lisp routine the command is Convert_Text_to_Point.

Load the lisp into AutoCAD and run the command. When prompted for a selection, I simply selected all of the text.

2016-12-05_12-17-49

Once done, your DWG file will look something like this. To change the display of your points from dots to crosses, you can change the PDMODE variable. In the screenshot below my PDMODE is set to 2.

2016-12-05_12-19-22

The next step is to remove all points that have a Z value equal to 0.0. Remember, the information we received from the surveyor had points at 0 elevation, and we've now just generated a new set of points from the text labels. This means we'll have duplicate points: some with a Z value of 0.0 and some at the correct Z value. Removing these points is simple using the Quick Select tool.

2016-12-05_12-23-50

Once done, we're left with points at roughly the correct spatial coordinates and elevations. The reason I say 'roughly correct' is that the text insertion point may be offset from that of the original point; in the example I'm using, the accuracy was within 100mm.

The final step in AutoCAD is the DATAEXTRACTION tool. You need to make sure your DWG has been saved before you run the data extraction. The tool itself is fairly straightforward, however if you want step by step instructions, they're just below.

Step by Step – Using the data extraction tool

1 & 2. Select 'Create a new data extraction' and click next.

3. If you only need to extract data from one DWG file, click next. If you need to extract data from multiple files, add them to the list and then click next.

4 & 5. Make sure that you only have points selected, then click next.

6 & 7. Uncheck all items except for Position X, Position Y and Position Z.

8 & 9. Uncheck combine identical rows, show count column and show name column. Click next.

10 & 11. Select the output location and file type. You can select either CSV or XLS, as we need to make some changes to the file in Excel before importing it into Revit.

12. Click finish and the file will export.

Manipulating the Data in Excel

The next step is manipulating the data in Excel so that the points import into Revit correctly. A lot of people misunderstand how Revit's coordinate system works, which often leads to problems when working with data between AutoCAD and Revit. Even if your project is in shared coordinates and you can link your DWG file in by shared/world coordinates, an import like this one, where we are bringing topography in via a CSV file, will not drop the topography in the correct location.

2016-12-05_16-08-52

In Revit, your project base point is exactly that: your project base point, the 0,0 of your project. This means that to process our data in Excel, we need to subtract the coordinates of the project base point from the survey points we exported from AutoCAD.

2016-12-05_16-35-27
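As a worked sketch with a hypothetical base point (yours will come from your own project setup), the spreadsheet maths is nothing more than a subtraction per point:

# Hypothetical project base point in survey coordinates (Easting, Northing).
base_e, base_n = 333500.0, 6250000.0

# Survey points from the data extraction: (Position X, Position Y, Position Z).
survey_points = [(333612.5, 6250088.0, 23.95), (333640.25, 6250102.5, 24.3)]

# Subtract the base point so the coordinates are relative to the project 0,0.
local_points = [(e - base_e, n - base_n, z) for (e, n, z) in survey_points]
print(local_points)  # [(112.5, 88.0, 23.95), (140.25, 102.5, 24.3)]

In Excel itself that's just a subtraction formula along the lines of =A2-$E$1 filled down each of the X and Y columns.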

Once you’ve subtracted the base point coordinates from the coordinates of the survey points, the end result should look similar to the screenshot below which lists out all the points in coordinates that are relevative to the project base point.

2016-12-05_16-43-56

 

Finishing Up – Importing The Topography into Revit

The final step of the process is to import the CSV file itself. Again, the import is quite simple, but if you're after step by step instructions, they're just below.

 

2016-12-06_13-26-59

 

Step by Step – Importing the CSV to Revit

1. From the Massing & Site tab, select 'Create from import' and then 'Specify points file'. An open file dialogue box will appear; select the CSV file that you created.

2. Select the correct units for the file.

3. Finish up the topography generation.

Tidying up the output

To finish up, you'll need to review the imported topography and tidy up the output. You can see that in my example a few 0 elevation points made it through into the CSV file.

2016-12-06_13-29-24

You can modify the topography using the edit surface tool. It's up to you how you handle it; in this instance I chose to delete the 0 elevation points, however you may want to fix up the elevations of those points instead to make sure all the gaps have been filled and the topography is as complete as possible.

2016-12-06_13-51-08

Although it was a fairly lengthy how-to, the entire process from start to finish should take no longer than 10-15 minutes, which in most cases is much quicker and far less painful than going into battle with the surveyor over a bunch of 0 elevation points.

Using Dynamo to Generate Pipework Hangers

If you’re finding yourself modelling more detailed models that reflect proposed fabrication or constructed works, Cesare Caoduro over at the BIM and Others blog has a great step by step tutorial on how to use Dynamo to generate Unistrut style pipework supports.

If you’re mechanical or electrical it would be quite easy to adapt the first portion of the script to generate the same type of supports for ductwork and cable trays.

You can check out Cesare's post here.

Consistency is Key. Setting Project Info With Dynamo

Over the last 3 months I've been busy working hard on coordinating the BIM for an existing infrastructure study of a hospital. The site consists of everything from heritage listed sandstone buildings constructed in the 1800s, where for obvious reasons there are no existing drawings, through to a building that's currently in the final stages of construction and has been fully designed and coordinated in BIM. The infrastructure study involved locating the assets and services that interconnect between buildings, placed with reasonable spatial accuracy in the BIM at LOD 200 as per the BIMForum guidelines.

When it came to the BIM, we decided to work with one building per MEP model, which meant we had 28 MEP building models, 28 architecture building models created using a series of stacked DWG files, and 4 site models. The obvious problem with so many models was always going to be the consistency of the data, and how we would go about verifying it. Ensuring that all 60 models held the same consistent information was a mountainous task that would have taken an exorbitant number of hours if reviewed manually, even utilising BIMLink.

Enter stage left: Dynamo.

We used Dynamo far more extensively on this project than on any I have worked on before. Normally I'd work with little snippets to process small amounts of data and automate minor repetitive tasks, but this project was a real BIM project; there were no traditional drawing deliverables, which genuinely seemed to baffle newcomers to the project. The deliverable was the federated model and, more importantly, the information contained within all the individually modelled elements. A few hours on one of my Sundays and I ended up with what you see below.

2016-09-16_14-30-00

That structured mess was able to verify photo file names and associated photo URLs. It verified that asset codes were correct and, if they weren't, it generated new asset codes in the required format; it also checked and corrected all the information required to generate those new asset codes. Finally, and probably the simplest part of it all, it filled in the project information parameters for us. It was run on all the MEP models, with another run on all the architecture models that we created.

Although we were able to automate a lot of really mundane processes, they were for the most part fairly project specific, so even though the Dynamo script itself was invaluable to the project, beyond the experience gained it doesn't hold much value for future work. There was however one custom node that I put together for populating the Project Information parameters that will probably get used again and again on projects in the future.

2016-09-16_14-48-01

Each input of the node is filled with a string for each individual parameter. In this project, the building name/number parameter relied on the levels within the model being named correctly, for which there was another portion of the script that checked the level naming conventions had been followed.

The processing of the data itself is performed by Python code inside the custom node, after which the output shows the data that has been filled in. You can either pick the custom node up from the MisterMEP Dynamo package, or if you want to recreate it yourself, the Python code is below.

import clr
clr.AddReference("RevitServices")
import RevitServices
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager

doc = DocumentManager.Instance.CurrentDBDocument
projinfo = doc.ProjectInformation

# The inputs to this node are stored in the IN variables.
OrgName = IN[0]
OrgDesc = IN[1]
BuildNumber = IN[2]
ProjAuthor = IN[3]
ProjDate = IN[4]
ProjStat = IN[5]
ProjClient = IN[6]
ProjAddress = IN[7]
ProjName = IN[8]
ProjNumber = IN[9]

# Write the strings into Project Information inside a transaction.
TransactionManager.Instance.EnsureInTransaction(doc)

projinfo.OrganizationName = OrgName
projinfo.OrganizationDescription = OrgDesc
projinfo.BuildingName = BuildNumber
projinfo.Author = ProjAuthor
projinfo.IssueDate = ProjDate
projinfo.Status = ProjStat
projinfo.ClientName = ProjClient
projinfo.Address = ProjAddress
projinfo.Name = ProjName
projinfo.Number = ProjNumber

TransactionManager.Instance.TransactionTaskDone()

# Echo back what was written so the node output confirms the run.
elementlist = list()
elementlist.append("DONE!")
elementlist.append(projinfo.OrganizationName)
elementlist.append(projinfo.OrganizationDescription)
elementlist.append(projinfo.BuildingName)
try:
    elementlist.append(projinfo.Author)
except:
    # Guard in case the Author property isn't readable in this document.
    elementlist.append(list())
elementlist.append(projinfo.IssueDate)
elementlist.append(projinfo.Status)
elementlist.append(projinfo.ClientName)
elementlist.append(projinfo.Address)
elementlist.append(projinfo.Name)
elementlist.append(projinfo.Number)
OUT = elementlist

The Truth About Chips

Last month, AUGI magazine ran an article titled What is The Best Computer for Revit. Although the article goes into what management, the 'superstar' and the user each require, to be frank, some of the advice within was absolutely terrible. The final paragraph states:

"..whatever level of workstation The User and The Superstar gets, ensure the best Xeon with the most multicore processors.. and get the most RAM above 64GB you can"

If you haven’t noticed yet, I’m a bit of a nerd. I don’t just work in BIM every day with a background in hydraulics design, I also have a background in IT. I used to work as an IT Manager in an engineering firm and I have studied a Cisco Certified Network Associate (CCNA) course. Even though I’m no longer working in an IT role, as BIM Manager part of my job is assisting the IT team testing and recommending hardware for various roles within the company. Understanding engineering hardware requirements and purchasing hardware that not only fits those requirements but also fits a reasonable budget has been part of my job for years.

The statement in the article that your company is "absolutely broke and doesn't really care as much about lost $$$ and productivity" if they're only giving your machine 32GB of RAM is laughable, but my biggest gripe with the article is the recommendation on the processor.

For the non-techies out there, the processor, or CPU, is the chip that is the beating heart of your computer. Unfortunately a lot of people (maybe your Mum) think that the CPU is "the box". It's not the box, it's inside the box! There are two players in the consumer market for desktop and laptop CPUs, Intel and AMD. Intel has been king of the hill for a number of years now in both the corporate and enthusiast markets, however AMD is working on a big comeback with the release of their new chips in the next 6 months.

Let’s see what was recommended for the processor from the AUGI article

cpu

"Processor: Xeon multicore processors. Get as many and as robust cores as your $ allows. Did someone say 44 cores and 512GB RAM? Yes, I just did. You can and someone should check processor ratings and speeds at CPUbenchmark.net"

Ignoring that no one seems to have proofread the article, let's have a look at this pie in the sky suggestion. At 44 cores we would be looking at the recently released Intel Xeon E5 2699v4.

For the tech guys, that's a 22 core, 44 thread Broadwell based CPU that runs a base frequency of 2.2GHz and a turbo frequency of 3.0GHz.

For the non-tech guys out there, it's some of Intel's latest technology, released in the first quarter of 2016. It's also a CPU that retails for AU$7526.74. That's just the CPU. No RAM. No hard drive. No case. That price is actually the cheapest I can find the chip for in Australia. Admittedly there is a little bit of Australia tax included, as prices in the USA start at around US$4644 (AU$6169), but that's still the kind of budget you'd normally have for an entire high end Revit machine.

So with 22 cores/44 threads and AU$7500 less in your pocket, what exactly can Revit do? Let's take it from Autodesk themselves: their knowledge network has a handy article on which functions in Revit will make use of multiple processors. When they refer to "multiple processors", they're talking about multiple cores or threads. A core is a physical CPU core on the chip, whereas the thread count is how many simultaneous actions the CPU can process; threads show up as additional cores if the CPU supports a technology called Simultaneous Multi Threading (SMT). People incorrectly say that a CPU has 44 cores when in fact it has 22 cores but is able to simultaneously process 44 threads. Intel calls their version of SMT Hyperthreading, whereas AMD currently do not support SMT on their CPUs; AMD will however be introducing SMT on their soon to be released Zen chips.

From Autodesk:

Multi-threaded processes in Revit 2017:

  • Vector printing
  • Vector Export such as DWG and DWF
  • Autodesk Raytracer
  • Wall Join representation in plan and section views
  • Loading elements into memory. Reduces view open times.
  • Parallel computation of silhouette edges
  • Translation of high-level graphical representation of model elements
  • File Open and Save
  • Point cloud data display
  • DWF Export as individual sheets utilizes multiple processes.
  • Color fill calculations are processed in the background on another process.
  • Calculation of structural connection geometry in the background

If you’re rendering all day every day and that’s all that you’re doing, then the E5 2699v4 would serve you well. If you’re not rendering all day every day, you’d want to hope that you’re connecting a hell of a lot of walls, bucket loads of vector prints and not much else. Sure that’s an oversimplification, however when you sit down and think about what you really do every day in Revit, you can see that for the most part a single core is all that is needed. You could argue that Revit also uses multiple cores for opening and saving files so that’s something! You save to central every 15mins as you should! The reality of it is that when you’re opening and saving files you’re not limited by the CPU, you’re limited by the speed of your hard drive and more so, your network.

A great source of information on CPU performance is the Passmark CPU Benchmark site. Passmark gathers information from benchmark tests run on hundreds of thousands of computers with many different configurations, which allows them to provide fairly accurate results of how one system performs relative to another. It's as simple as checking out the tables: the higher the number, the higher the relative performance of that CPU. A CPU with a score of 4000 will process roughly twice as much data as a CPU with a score of 2000. When comparing the best CPUs for Revit, the two charts you want to look at are Single Thread Performance for your day to day workload, and the High End CPU chart for multi threaded work such as rendering.

Looking at single threaded performance for day to day work, that $7500+ CPU is going to perform about as well in Revit as a $76 Haswell based Pentium; the $76 chip runs single threaded applications at 98.6% of what the $7500 Xeon E5 2699v4 is capable of.

2016-09-12_14-16-48

It’s an extreme example and I’d never ever suggest buying the G3250T for a Revit machine, but it shows just how silly the suggestion of “the best Xeon with the most multicore processors” really is. In fact it’s a fairly poor recommendation for rendering as well as the CPU that is king of multi-thread performance, outperforming our 44 thread xeon by 20% only has a lowly 20 cores/40 threads!

2016-09-12_15-11-48

But being outpaced by a CPU with 2 fewer cores isn't the only reason why it's a bit of a silly chip for Revit…

So what actually is the best processor you can get for Revit?

As you’ve probably figured out by now, that actually depends on what your daily tasks consist of. Autodesk actually tell you what you should consider in their system requirements page on the knowledge base. When reading through the system requirements, you need to understand that the minimum requirements are just that. The minimum requirements to run Revit and not much else. If you’re modelling an average size house or a kitchen remodel, the minimum is probably fine. Anything beyond that, you need more power!

Jumping straight on up to Autodesk's performance specification, they recommend:

"Multi-Core Intel® Xeon®, or i-Series processor or AMD® equivalent with SSE2 technology. Highest affordable CPU speed rating recommended."

The next sentence states:

"Autodesk® Revit® software products will use multiple cores for many tasks, using up to 16 cores for near-photorealistic rendering operations."

The emphasis in both those sentences is mine, but it's quite important. The highest affordable CPU speed rating is really geared toward your everyday workload; this is single thread performance. We already know from Autodesk's multi-threading article that only a small list of tasks actually use multiple cores, and the core count quite clearly highlights that if you go out and buy a 44 thread CPU for Revit, you're going to be wasting 28 of those cores/threads. Anything over 8 cores/16 threads is simply wasted.

If it’s for your daily grind in Revit and your daily grind is like everyone else’s; churning out models, then you want the fastest single threaded processor you can get your hands on. At the moment, this is still the Haswell based Devil’s Canyon, otherwise known as the i7 4790K. It’s an AU$489 4 core/8 thread CPU which comes in at 150% the performance of the 44 core chip. If you want to stick to the Xeon brand, and there are good reasons to, the chips of choice are the Xeon E3 processors which are targeted toward single threaded applications. The fastest E3 currently on the market is the E3 1281v3 which retails for around AU$500 and has almost identical performance to the i7 4790k.

Multi-threaded tasks, which let's face it means rendering as the only one worth worrying about, are a different story. If rendering is your day to day job you need to look at a Xeon E5 chip; they're targeted at multi-threaded workloads. The E5 chips currently available are based on the Broadwell variant of the i7 lineup, and at 8 cores/16 threads the picks of the bunch would be the E5 1680v4 and the E5 2667v4, but these still retail at AU$3018 and AU$3478 respectively for the CPU alone. You're in luck though if you want the performance and aren't tied to the big corporate PC companies: an 'E' series i7 chip might be just what you're after. The 8 core/16 thread i7 6900K comes in at AU$1534 and matches the E5 1680 in performance in both single and multi-thread applications.