Ryan Lenihan

Site Extraction with flux.io and Dynamo

By now, most people in the industry will have heard of flux.io, a spin-off from X (formerly Google X). Recently, flux.io updated their site extraction tool, which pulls data from free open data sources such as OpenStreetMap and NASA. Combined with Dynamo, it couldn’t be simpler to pull topography information into your Revit model.

So how do we get started with this new-fangled technology?

Firstly, you’ll need a flux.io account. Once you have that sorted, head on over to https://extractor.flux.io/. Once there you’ll be greeted with a Google map where you can search for your location. The map system works exactly as you’d expect it to. Simply drag and resize the selection box around the area you’re interested in, then select what you want from the menu at the top right of your screen.

When your data is ready, you can open it in Flux and review the results. You simply drag and drop your keys from the column on the left into the space on the right. You can pan, zoom and rotate your way around the 3D preview, although as someone who works in Revit and Navisworks all day long, I found the controls aren’t the easiest.

Struggling with the navigation?

So all of this is great, but how do you get this into Revit? It’s actually incredibly simple.

You will need to have both Dynamo and the flux.io plugin suite installed, but once you do you’re only a few minutes away from generating a Revit topography.

To get started you will need to log in to flux.io through both Revit and Dynamo. If it’s your first time using flux.io, you might have to approve the connection between Revit/Dynamo and Flux, much as you would when authorising account access for online services with Google or Facebook.

Find the Flux package within Dynamo and first drop in the Flux Project node.

Once you have your Flux project selected, it’s just three more nodes. Drop in the Receive from Flux node and select the topographic mesh from the drop down. From there, push the Flux topography into Mesh.VertexPositions and then finally into Topography.ByPoints.
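If you prefer scripting to wiring nodes, the same three-node chain can be approximated inside a Dynamo Python node. This is a minimal sketch only, assuming the Flux mesh is wired into IN[0] as a Dynamo mesh:

# Sketch only: turn a Dynamo mesh into a Revit toposurface.
import clr
clr.AddReference("RevitServices")
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager
clr.AddReference("RevitAPI")
from Autodesk.Revit.DB import XYZ
from Autodesk.Revit.DB.Architecture import TopographySurface
clr.AddReference("RevitNodes")
import Revit
clr.ImportExtensions(Revit.GeometryConversion)
from System.Collections.Generic import List

doc = DocumentManager.Instance.CurrentDBDocument
mesh = IN[0]

# Convert each Dynamo vertex to a Revit XYZ (ToXyz handles the unit conversion).
points = List[XYZ]([p.ToXyz() for p in mesh.VertexPositions])

TransactionManager.Instance.EnsureInTransaction(doc)
topo = TopographySurface.Create(doc, points)
TransactionManager.Instance.TransactionTaskDone()

OUT = topo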

Comparing the Flux topography in red against the professional survey in blue, we can see that the Flux topography is no replacement for a real survey; we’re looking at a 5-8m difference between the survey and the Flux data. Thankfully, surveyors aren’t going to be out of a job any time soon. This is only the case on the example site in Sydney, though; other sites are far more accurate depending on where the source data comes from. Remember, the Flux data comes from a combination of sources, including satellite survey, which leads to varying levels of accuracy. You shouldn’t rely on open data like this as your sole source of information; you should be verifying it against the relevant site survey information.

The inaccuracy of the data doesn’t make the Flux data useless, though. Provided you’re able to reconcile the Flux data against known survey data and adjust to suit, it presents an excellent opportunity to fill in missing information surrounding your known survey and site. You then have the opportunity to use the data for visualisation in concept stages or flyover presentations of large sites or precincts.


What to do when you have no option to save your Navisworks NWD

A quick tip for your Tuesday afternoon.

Have you ever had the problem where no matter how many times you click on the save button, you can’t see the option to save your Navisworks file as an NWD?


This likely means that you have an NWD file attached that was published with the ‘May be re-saved’ option unchecked. Have a check through your selection tree, find the offending NWD file, remove it and try to save again. All of a sudden, saving as an NWD is an option again.

In this case, the fix is to get another copy of the file, this time with the ‘May be re-saved’ option checked. Better yet, if you will be working with future iterations of the file as your project progresses, ask for an NWC.

So you’re having trouble with masked text being transparent…

You have some grids. You have some text. The text is opaque, but for some reason it still appears to be transparent. You’ve probably pulled almost all of your hair out trying to figure out the problem, looking at all manner of view settings even when they’re not related to text or grids, but at the end of the day it’s really quite simple.


It is literally your draw order.

Even though your text type has an opaque background, if you create the text first and then draw another object over the top, the object drawn last will appear on top making it seem like the text is transparent. You can confirm this by creating new text and placing it over the same object which will now hide the object.

Rather than re-create every single text note, you can fix this problem quickly by selecting all the affected text, cutting it and then pasting it aligned to the view.

So lessons learnt?

Well, in this instance the root cause of the problem was that rather than shifting grids, the architect deleted them and created new ones. The copy monitored grids in the MEP model were then deleted and re-copy monitored, and all of a sudden some of the grids appeared through the text, as they had been created after the text was.


Populate Node Identifiers to Pipework Using Dynamo

In MEP design, some workflows require you to take the node identifier at each connection point of a fitting throughout a pipework system and populate it across onto the pipes themselves, which in turn gives each pipe a start node number and an end node number.

Working through this process manually is an arduous task: selecting each pipe fitting and each pipe throughout the project and entering the data piece by piece. Let’s hope you don’t have anywhere to be this week, ’cause if you’re doing it the manual way… you’re working late!

The thing is, we don’t need to manually enter this data any more. We can automate it using Dynamo.

Jumping in and taking a stab at the solution, you might hit a brick wall. The problem we’re posed with is: how do we know which element is connected to which other element through Dynamo? I can see that the pipework is interconnected in my view, but how does Dynamo know?

The simple wizardry behind it all is a two step process using bounding boxes. With bounding boxes around your modelled elements you can see which fittings and pipes interconnect by their intersecting bounding boxes, and from there you can push and pull the data from fittings to pipes and vice versa.

The first step is the creation of the bounding boxes. As you can see from the screenshot, we’re taking the start and end points of the pipework and creating a 150mm square bounding box at each using the Cuboid.ByLengths node. To make life a little easier, we’ve used the Clockwork node Element.Location to get the endpoints of the pipes.
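In script form, that bounding box step might look like the sketch below inside a Dynamo Python node. It’s a sketch only: it assumes the pipe start and end points have already been extracted (e.g. with Clockwork’s Element.Location) and wired into IN[0] and IN[1]:

# Sketch only: build a 150mm cube centred on each pipe start and end point.
import clr
clr.AddReference("ProtoGeometry")
from Autodesk.DesignScript.Geometry import Cuboid

starts = IN[0]  # list of Dynamo Points at pipe starts
ends = IN[1]    # list of Dynamo Points at pipe ends

# Cuboid.ByLengths centres the box on the given origin point.
start_boxes = [Cuboid.ByLengths(p, 150, 150, 150) for p in starts]
end_boxes = [Cuboid.ByLengths(p, 150, 150, 150) for p in ends]

OUT = [start_boxes, end_boxes]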

Keep a close eye on the nodes in the screenshot: for this to work properly you need a few of the nodes to use cross product lacing. You can change a node’s lacing by right clicking on the node and selecting Lacing -> Cross Product from the popup menu.

Because we are testing every piece of pipe against every pipe fitting, we need to remove the null results. In the example we’re testing 12 pipes against 12 fittings, and when using cross product lacing this gives us a total of 144 results, where only 24 of those results are meaningful: 12 from each list (start and end).
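Conceptually, cross product lacing is just a nested loop: every pipe box is tested against every fitting box. Here’s a minimal sketch of the same test in a Dynamo Python node, assuming pipe_boxes and fitting_boxes are cuboid lists like those built above (the names are illustrative):

# Sketch only: test every pipe bounding box against every fitting bounding box.
results = []
for pipe_box in pipe_boxes:
    row = []
    for fitting_box in fitting_boxes:
        # Geometry.DoesIntersect returns True where a pipe end meets a fitting.
        row.append(pipe_box.DoesIntersect(fitting_box))
    results.append(row)

# With 12 pipes and 12 fittings this produces 12 x 12 = 144 booleans,
# and only the True entries mark a real pipe-to-fitting connection.
OUT = results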

You can see in this snippet of the 144 results that the pipe at index [0] intersects the pipe fitting at index [0]. Although we cannot see which fitting intersects the pipe at index [1], it does show that there is no guaranteed order in which the results will come in.

As a result, we need to make sure we are managing our lists correctly, to ensure that we pull the right information and populate it accurately.

We need to use some custom nodes from both the Spring Nodes and Lunchbox packages to help keep our lists in order.

NullIndexOf is based on the built-in IndexOf node: it returns the index of the element in the list that (in this case) returns true, but unlike the built-in node it returns null instead of -1 when there is no match.

The next step is to use the Manage.RemoveNulls node from the Lunchbox package to remove the nulls from our lists, as not every start/end point of the pipes will intersect with a fitting (e.g. the start of the pipe run in this example).
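If you’d rather not rely on the packages, both behaviours are easy to emulate in a Python node. A rough sketch of what NullIndexOf and Manage.RemoveNulls are doing for us, continuing from the intersection results above (names are illustrative):

# Sketch only: emulate NullIndexOf and Manage.RemoveNulls in plain Python.
def null_index_of(bools):
    # Index of the first True in the list, or None (null) if there isn't one.
    for i, value in enumerate(bools):
        if value:
            return i
    return None

def remove_nulls(items):
    # Split a list into (cleaned values, indices of the removed nulls).
    cleaned = [x for x in items if x is not None]
    null_indices = [i for i, x in enumerate(items) if x is None]
    return cleaned, null_indices

# One intersection-result row per pipe end gives one fitting index per pipe end.
fitting_indices = [null_index_of(row) for row in results]
cleaned, null_indices = remove_nulls(fitting_indices)
OUT = [cleaned, null_indices]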

So now we have a list for the starts and a list for the ends of the pipes, with the corresponding index of the fitting that each pipe intersects. The second output is a list of the pipe ends that don’t intersect with a fitting, which we take care of in a later step.

Now that we have removed the nulls from our list, we can grab the data required to fill in our fittingId parameter. It is important to remember that we cannot feed in the list of fittings in the original order we used to create the bounding boxes. To make sure our data remains correct, we use the Cleaned list from Manage.RemoveNulls and feed it into a List.GetItemAtIndex node. This organises our fittings into the order required to populate the data correctly.

Now that our fittings are in the correct order, we’re ready to push the node id data from the fittings to the pipes. We do this using the Element.GetParameterValueByName node to get the fittingId, then pushing it through the Element.SetParameterByName node to the corresponding startNode and endNode parameters on the pipes.

Before we pass our pipe list through to be set with the fittingId, we need to make sure we remove the pipes that don’t have a startNode or endNode value. We can achieve this using List.RemoveItemAtIndex and the list of indices from Manage.RemoveNulls.
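Pulled together in a single Dynamo Python node, the final push of data might look something like the sketch below. The parameter names fittingId, startNode and endNode are the ones used in this project, the IN wiring is illustrative, and I’m using the raw Revit API (UnwrapElement and LookupParameter) rather than the Get/SetParameterValueByName nodes:

# Sketch only: copy each intersecting fitting's fittingId onto the
# corresponding pipe's startNode parameter (repeat the pattern for endNode).
import clr
clr.AddReference("RevitServices")
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager

doc = DocumentManager.Instance.CurrentDBDocument

pipes = IN[0]     # pipes that actually intersect a fitting (nulls removed)
fittings = IN[1]  # fittings reordered using the cleaned index list

TransactionManager.Instance.EnsureInTransaction(doc)
for pipe, fitting in zip(pipes, fittings):
    fitting_id = UnwrapElement(fitting).LookupParameter("fittingId").AsString()
    UnwrapElement(pipe).LookupParameter("startNode").Set(fitting_id)
TransactionManager.Instance.TransactionTaskDone()

OUT = pipes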

With the press of a button, Dynamo can turn a process that would take days of manual data entry into a task you can complete within seconds.


Generating 3D Topography from a 2D DWG file

Being an MEP guy, topography is one of those things in Revit I don’t deal with too often, but the general concept of working with topography isn’t tremendously difficult.

The two main options you have for automagically generating a topography surface are to import from a DWG file that contains 3D surfaces (generally topography triangles), or to generate it from a list of points in a CSV file.
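As a point of reference, the points file Revit expects for the CSV route is nothing exotic: a plain comma-delimited list of x, y and z values, one point per line, something like this (the values are illustrative only):

320450.125,6250312.500,23.150
320455.750,6250318.250,23.420
320461.300,6250324.100,23.690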

[screenshot]

The problem is, sometimes you won’t get a DWG file with 3D surfaces and you don’t have a CSV file either… in fact, the surveyor sent you a DWG file with surface points at 0 elevation. Useless, right? Well, maybe not quite. Before you jump up and down telling the surveyor to “do your job properly” and demanding the file in 3D, you might be able to impress everyone with your skills and get quicker results.

Step 1 – Extracting the data from AutoCAD

Firstly, you want to extract the usable information from AutoCAD. You need a really clear understanding of the data you’ve been given. Survey drawings can contain spot information for elements such as buildings, trees, fences and other non-surface features, so you need to make sure you’re grabbing only the information relevant to the topography surface. The last thing you want to be doing is telling everyone that the data is rubbish because “these random spots I decided to look at are 5m higher than the surrounding surface, the surveyor must be wrong”.

Using your preferred workflow, strip out all the information from the DWG file not related to the surface. My method is to freeze the unrelated layers and then copy and paste what remains into a new DWG file.

[screenshot]

This will more than likely leave behind linework that isn’t required to generate your topography. You can remove anything from the DWG that isn’t a point or text if you like, but leaving it in won’t affect the process. If there is any MTEXT in the file, select it and explode it so it becomes regular single line text (DTEXT).

If you’re removing the redundant linework and only leaving points and text, once finished the before and after should look something like this

[screenshot]

The next step is to use a LISP routine to generate 3D points from the text that identifies the elevations throughout the drawing. You might already have a LISP routine that does the job, but if not, a quick Google search turns up this page from CADTutor, which has the source for a routine written by the user Geobuilder. This is why we had to explode our MTEXT: the routine only works on single line DTEXT.

Text to Points LISP

Copy and paste the LISP code into Notepad and save it as something you’ll remember. I’ve saved mine as txt2point.lsp.

[screenshot]

In case you were unaware, the command name for a LISP routine is defined after (defun C: in the code, so in the case of Geobuilder’s routine, the command is Convert_Text_to_Point.

Load the LISP into AutoCAD and run the command. When I ran the command, I simply selected all the text.

[screenshot]

Once done, your DWG file will look something like this. To change the display of your points from dots to crosses, you can change the PDMODE variable. In the screenshot below my PDMODE is set to 2.

[screenshot]

The next step is to remove all points that have a Z value equal to 0.0. Remember, the information we received from the surveyor had points at 0 elevation, and we’ve now just generated a new set of points from the text labels. This means we’ll have duplicate points, some with a Z value of 0.0 and some at the correct Z value. Removing these points is simple using the Quick Select tool.

[screenshot]

Once done, this will leave us with points at roughly the correct spatial coordinates and elevations. The reason I say ‘roughly correct’ is that the text insertion point may be offset from that of the original point; in the example I’m using, the accuracy was within 100mm.

The final step in AutoCAD is the DATAEXTRACTION tool. You need to make sure your DWG has been saved before you run the data extraction. The data extraction tool is fairly straightforward, however if you want step by step instructions, you can expand the section below.

Step by Step - Using the data extraction tool

Manipulation of the Data in Excel

The next step is manipulating the data in Excel so that the points import into Revit correctly. A lot of people misunderstand how Revit’s coordinate system works, which often leads to problems when working with data between AutoCAD and Revit. Even if your project is in shared coordinates and you can link your DWG file in by shared/world coordinates, imports such as this one, where we are bringing topography in via a CSV file, will not drop the topography in the correct location.

[screenshot]

In Revit, your project base point is exactly that – your project base point, the 0,0 of your project. This means that to process our data in Excel, we need to subtract the coordinates of the project base point from the survey points we exported from AutoCAD.
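You can do the subtraction in Excel as shown below, or script it. Here’s a minimal Python sketch, assuming a headerless x,y,z CSV and an illustrative base point (substitute your own project base point coordinates):

# Sketch only: shift survey points so they're relative to the project base point.
# Assumes points.csv holds comma-delimited x,y,z rows with no header row.
import csv

BASE_X, BASE_Y, BASE_Z = 320000.0, 6250000.0, 0.0  # illustrative base point

with open("points.csv") as src, open("points_local.csv", "w", newline="") as dst:
    writer = csv.writer(dst)
    for x, y, z in csv.reader(src):
        writer.writerow([float(x) - BASE_X, float(y) - BASE_Y, float(z) - BASE_Z])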

[screenshot]

Once you’ve subtracted the base point coordinates from the coordinates of the survey points, the end result should look similar to the screenshot below, which lists out all the points in coordinates that are relative to the project base point.

[screenshot]


Finishing Up – Importing The Topography into Revit

The final step of the process is to import the CSV file itself. Again the import process itself is quite simple, but if you’re after step by step instructions you can expand the section below.


[screenshot]


Step by Step - Importing the CSV to Revit

Tidying up the output

To finish up, you’ll need to review the imported topography and tidy up the output. You can see here that in my example a few 0 elevation points made it through into the CSV file.

[screenshot]

You can modify the topography using the edit surface tool. It’s up to you how you handle it; in this instance I chose to delete the 0 elevation points, however you may want to fix up the elevations of the points to make sure all the gaps have been filled in and the topography is as complete as possible.

[screenshot]

Although it was a fairly lengthy how-to, the entire process from start to finish should take no longer than 10-15 minutes, which in most cases is much quicker and far less painful than going into battle with the surveyor over a bunch of 0 elevation points.

Using Dynamo to Generate Pipework Hangers

If you’re finding yourself modelling more detailed models that reflect proposed fabrication or constructed works, Cesare Caoduro over at the BIM and Others blog has a great step by step tutorial on how to use Dynamo to generate Unistrut style pipework supports.

If you’re mechanical or electrical it would be quite easy to adapt the first portion of the script to generate the same type of supports for ductwork and cable trays.

You can check out Cesare’s post here.

Consistency is Key. Setting Project Info With Dynamo

Over the last three months I’ve been busy coordinating the BIM for an existing infrastructure study of a hospital. The site consists of everything from heritage listed sandstone buildings constructed in the 1800s, for which, for obvious reasons, there are no existing drawings, through to a building that’s currently in the final stages of construction and has been fully designed and coordinated in BIM. The infrastructure study involved locating assets and services that interconnect between buildings, within relatively accurate space in the BIM, at LOD 200 as per the BIMForum guidelines.

When it came to the BIM, we decided to work with one building per MEP model, which meant we had 28 MEP building models, 28 architecture building models created using a series of stacked DWG files, and 4 site models. The obvious problem with so many models was the consistency of the data and how we would go about verifying it. Ensuring that all 60 models carried the same, consistent information was a mountainous task that would have taken an exorbitant number of hours to complete if reviewed manually, even utilising BIMLink.

Enter stage left: Dynamo.

We used Dynamo far more extensively on this project than on any I have worked on before. Normally I’d work with little snippets to process small amounts of data and automate minor repetitive tasks, but this project was a real BIM project; there were no traditional drawing deliverables, which seemed to genuinely baffle newcomers to the project. The deliverable was the federated model and, more importantly, the information contained within all the individually modelled elements. A few hours on one of my Sundays and I ended up with what you see below.

[screenshot]

That structured mess was able to verify photo file names and associated photo URLs; it verified that asset codes were correct and, if they weren’t, generated new asset codes in the required format; it checked and corrected all the information required to generate those new asset codes; and finally, probably the simplest part of it all, it filled in the project information parameters for us. It was run on all the MEP models, with another run on all the architecture models that we created.

Although we were able to automate a lot of really mundane processes, they were for the most part fairly project specific, so even though the Dynamo script itself was invaluable to the project, beyond the experience gained it doesn’t hold much value for future projects. There was, however, one custom node that I put together for populating the Project Information parameters that will probably get used again and again on projects in the future.

[screenshot]

Each input of the node is filled with a string for each individual parameter. In the project, the building name/number parameter relied on the levels within the model being named correctly, for which another portion of the script checked that the level naming conventions had been followed.

The processing of the data itself is performed by Python code inside the custom node, after which the output shows the data that has been filled in. You can pick the custom node up from the MisterMEP Dynamo package or, if you want to recreate it yourself, the Python code is below.

import clr
clr.AddReference("RevitServices")
import RevitServices
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager

doc = DocumentManager.Instance.CurrentDBDocument
projinfo = doc.ProjectInformation

# The inputs to this node will be stored as a list in the IN variables.
OrgName = IN[0]
OrgDesc = IN[1]
BuildNumber = IN[2]
ProjAuthor = IN[3]
ProjDate = IN[4]
ProjStat = IN[5]
ProjClient = IN[6]
ProjAddress = IN[7]
ProjName = IN[8]
ProjNumber = IN[9]

# Writing to the model requires an open transaction.
TransactionManager.Instance.EnsureInTransaction(doc)

projinfo.OrganizationName = OrgName
projinfo.OrganizationDescription = OrgDesc
projinfo.BuildingName = BuildNumber
projinfo.Author = ProjAuthor
projinfo.IssueDate = ProjDate
projinfo.Status = ProjStat
projinfo.ClientName = ProjClient
projinfo.Address = ProjAddress
projinfo.Name = ProjName
projinfo.Number = ProjNumber

TransactionManager.Instance.TransactionTaskDone()

# Echo the values back out so the node reports what was set.
elementlist = list()
elementlist.append("DONE!")
elementlist.append(projinfo.OrganizationName)
elementlist.append(projinfo.OrganizationDescription)
elementlist.append(projinfo.BuildingName)
try:
    elementlist.append(projinfo.Author)
except:
    elementlist.append(list())
elementlist.append(projinfo.IssueDate)
elementlist.append(projinfo.Status)
elementlist.append(projinfo.ClientName)
elementlist.append(projinfo.Address)
elementlist.append(projinfo.Name)
elementlist.append(projinfo.Number)
OUT = elementlist

The Truth About Chips

Last month, AUGI magazine ran an article titled What is The Best Computer for Revit. Although the article goes into what management, the ‘superstar’ and the user each require, to be frank, some of the advice within was absolutely terrible. The final paragraph states:

“…whatever level of workstation The User and The Superstar gets, ensure the best Xeon with the most multicore processors… and get the most RAM above 64GB you can.”

If you haven’t noticed yet, I’m a bit of a nerd. I don’t just work in BIM every day with a background in hydraulics design; I also have a background in IT. I used to work as an IT manager in an engineering firm and I have studied a Cisco Certified Network Associate (CCNA) course. Even though I’m no longer working in an IT role, as BIM Manager part of my job is assisting the IT team in testing and recommending hardware for various roles within the company. Understanding engineering hardware requirements, and purchasing hardware that not only fits those requirements but also fits a reasonable budget, has been part of my job for years.

The statement from the article that your company is “absolutely broke and doesn’t really care as much about lost $$$ and productivity” if they’re only giving your machine 32GB of RAM is laughable, but my biggest gripe with the article is the recommendation on the processor.

For the non-techies out there, the processor, or CPU, is the chip that is the beating heart of your computer. Unfortunately a lot of people (maybe your Mum) think that the CPU is “the box”. It’s not the box, it’s inside the box! There are two players in the consumer desktop and laptop CPU market: Intel and AMD. Intel has been king of the hill for a number of years now in both the corporate and enthusiast markets, however AMD is working on a big comeback with the release of their new chips in the next six months.

Let’s see what was recommended for the processor in the AUGI article:

[screenshot]

“Processor: Xeon multicore processors. Get as many and as robust cores as your $ allows. Did someone say 44 cores and 512GB RAM? Yes, I just did. You can and someone should check processor ratings and speeds at CPUbenchmark.net.”

Ignoring that no one seems to have proofread the article, let’s have a look at this pie in the sky suggestion. At 44 cores we would be looking at the recently released Intel Xeon E5 2699v4.

For the tech guys, that’s a 22 core, 44 thread Broadwell based CPU that runs at a base frequency of 2.2GHz and a turbo frequency of 3.0GHz.

For the non-tech guys out there, it’s some of Intel’s latest technology, released in the first quarter of 2016. It’s also a CPU that retails for AU$7,526.74. That’s just the CPU. No RAM. No hard drive. No case. That price is actually the cheapest I can find the chip for in Australia. Admittedly there is a little bit of Australia tax included, as prices in the USA start at around US$4,644 (AU$6,169), but that’s still the kind of budget you’d normally have for an entire high end Revit machine.

So with 22 cores/44 threads and AU$7,500 less in your pocket, what exactly can Revit do? Let’s take it from Autodesk themselves: their knowledge network has a handy article titled Which function in Revit will take use of multiple processors. When they refer to “multiple processors” they’re talking about multiple cores or threads. A core is a physical CPU core on the chip, whereas the thread count is how many instruction streams the CPU can process simultaneously; each extra thread shows up as another core if the CPU supports a technology called Simultaneous Multi Threading (SMT). People incorrectly say that a CPU has 44 cores, when in fact it has 22 cores but is able to simultaneously process 44 threads. Intel calls their version of SMT Hyperthreading, whereas AMD currently do not support SMT on their CPUs, although they will be introducing it on their soon to be released Zen chips.

From Autodesk:

Multi-threaded processes in Revit 2017:

  • Vector printing
  • Vector Export such as DWG and DWF
  • Autodesk Raytracer
  • Wall Join representation in plan and section views
  • Loading elements into memory. Reduces view open times.
  • Parallel computation of silhouette edges
  • Translation of high-level graphical representation of model elements
  • File Open and Save
  • Point cloud data display
  • DWF Export as individual sheets utilizes multiple processes.
  • Color fill calculations are processed in the background on another process.
  • Calculation of structural connection geometry in the background

If you’re rendering all day every day and that’s all that you’re doing, then the E5 2699v4 would serve you well. If you’re not rendering all day every day, you’d want to hope that you’re joining a hell of a lot of walls and churning out bucketloads of vector prints, and not much else. Sure, that’s an oversimplification, however when you sit down and think about what you really do every day in Revit, you can see that for the most part a single core is all that is needed. You could argue that Revit also uses multiple cores for opening and saving files, so that’s something! You do save to central every 15 minutes, as you should! The reality of it is that when you’re opening and saving files you’re not limited by the CPU; you’re limited by the speed of your hard drive and, more so, your network.

A great source of information for CPU performance is the Passmark CPU Benchmark site. Passmark gathers its information from benchmark tests run on hundreds of thousands of computers in many different configurations, which allows it to provide fairly accurate results on how one system performs relative to another. It’s as simple as checking out the tables – the higher the number, the higher the relative performance of that CPU. A CPU with a score of 4000 will process roughly twice as much data as a CPU with a score of 2000. When comparing the best CPUs for Revit, the two charts you want to look at are Single Thread Performance for your day to day workload and High End CPUs for multi threaded work such as rendering.

Looking at single threaded performance for day to day work, that $7,500+ CPU is going to perform about as well in Revit as a $76 Haswell based Pentium; the $76 chip runs single threaded applications at 98.6% of what the $7,500 Xeon E5 2699v4 is capable of.

[screenshot]

It’s an extreme example, and I’d never suggest buying the G3250T for a Revit machine, but it shows just how silly the suggestion of “the best Xeon with the most multicore processors” really is. In fact it’s a fairly poor recommendation for rendering as well: the CPU that is king of multi-thread performance, outperforming our 44 thread Xeon by 20%, has a ‘lowly’ 20 cores/40 threads!

[screenshot]

But being outpaced by a CPU with two fewer cores isn’t the only reason why it’s a bit of a silly chip for Revit…

So what actually is the best processor you can get for Revit?

As you’ve probably figured out by now, that actually depends on what your daily tasks consist of. Autodesk tell you what you should consider on the system requirements page of their knowledge base. When reading through the system requirements, you need to understand that the minimum requirements are just that: the minimum to run Revit and not much else. If you’re modelling an average size house or a kitchen remodel, the minimum is probably fine. Anything beyond that and you need more power!

Jumping straight on up to Autodesk’s performance specification, they recommend:

“Multi-Core Intel® Xeon®, or i-Series processor or AMD® equivalent with SSE2 technology. Highest affordable CPU speed rating recommended.”

The next sentence states:

“Autodesk® Revit® software products will use multiple cores for many tasks, using up to 16 cores for near-photorealistic rendering operations.”

The emphasis in both those sentences is mine, but it’s quite important. The highest affordable CPU speed rating is really geared toward your everyday workload; this is single thread performance. We already know from Autodesk’s multi-threading article that only a small list of things actually use multiple cores, and the core count quite clearly highlights that if you go out and buy a 44 core CPU for Revit, you’re going to be wasting 28 of those cores/threads. Anything over 8 cores/16 threads is simply wasted.

If it’s for your daily grind in Revit, and your daily grind is like everyone else’s (churning out models), then you want the fastest single threaded processor you can get your hands on. At the moment this is still the Haswell based Devil’s Canyon, otherwise known as the i7 4790K. It’s an AU$489 4 core/8 thread CPU which comes in at 150% of the performance of the 44 core chip. If you want to stick to the Xeon brand, and there are good reasons to, the chips of choice are the Xeon E3 processors, which are targeted toward single threaded applications. The fastest E3 currently on the market is the E3 1281v3, which retails for around AU$500 and has almost identical performance to the i7 4790K.

Multi-threaded tasks are a different story, and let’s face it, the only one worth worrying about is rendering. If rendering is your day to day job you need to look at a Xeon E5 chip; they’re targeted at multi-threaded workloads. The E5 chips currently available are based on the Broadwell variant of the i7 lineup, and at 8 cores/16 threads the pick of the bunch would be the E5 1680v4 and the E5 2667v4, but these still retail at AU$3,018 and AU$3,478 respectively for the CPU alone. You’re in luck though if you want the performance and aren’t tied into the big corporate PC companies: an ‘E’ series i7 chip might be just what you’re after. The 8 core/16 thread i7 6900K comes in at AU$1,534 and matches the E5 1680 in performance in both single and multi-thread applications.

Modelo Brings Presentations And Collaboration to Anywhere With a Data Connection

In the last 6 to 18 months, the 3D collaboration and visualisation world has exploded with new software solutions to make life easier. The latest contender is Modelo, from a startup based in Cambridge. Modelo is a cloud based service that lets you view 3D models that have been optimised for your web browser, giving you the ability to view models on almost any device with a data connection. Being a cloud based service, the recipient of your model doesn’t even need to own viewing software, as the model comes to you through a series of tubes and is viewed entirely on the line.

You can upload any Revit, SketchUp or Rhino file to Modelo, and the original file is converted to an optimised format for viewing. The original file is kept on the Modelo servers, however there is the option to delete the original once the optimised file has been created.

Modelo is impressively fast for a browser based model viewing platform. You can share models with clients and the design team no matter where they’re located, allowing the team to annotate models and discuss through an online chat system.

It’s not collaboration in the league of Revizto; it’s collaboration made simple.

[screenshot]

The commenting functionality is extremely well thought out, with the ability to cut 3D sectional views or attach 2D images such as photos or plan views. Comments can be kept private or flagged as ‘client ready’, so that when you share your model, the client ready comments are displayed.

[screenshots]

Camera locations are remembered in the comments as well, meaning that when a comment is selected, the model seamlessly flies around to the view the comment was created in so you see exactly what the person making the comment sees.

You can even adjust basic settings within the model, such as turning layers on and off (it uses Revit worksets) and even adjusting the location of the sun to change shadow detail in real time. Of course, with just simple sliders and the model not being located in any real space, it’s a rough guide rather than a daylight and shadowing simulation, but the future potential for Modelo is obviously there.

[screenshots]

Sharing a model is as easy as sharing a file in any cloud based hosting service: a few clicks and you share a link. When sharing a model you have options to restrict who can view the model and who can see model comments.

[screenshot]

Sharing also gives you the ability to embed the model as an iframe. You may not realise it, but iframes aren’t just for websites; with a plugin like iSpring or LiveWeb you can even embed live models directly into a PowerPoint presentation.

The example above is a small part of a project that I’ve been working on for around 12 months now. The project involves a building structure on a bridge deck constructed from spans of supertee structure. The bridge team on the project were not working in Revit, so the supertee structure you’re seeing is actually a DWG file embedded within a Revit family, which has come across quite nicely. To get the colours to come through, you will need to have materials applied to your modelled elements, which in this instance I applied at a piping system level.

On top of all the collaboration features, Modelo also gives you the ability to create a virtual reality model from a Revit model. Check out the transformation in the video below, where Eli from Modelo demonstrates just how easy it is, going from Revit to VR in 120 seconds.

All this is great, but what about this new fangled on the line technology? Won’t everything fall over when the data connection drops out? Well, Modelo have this figured out: once the 3D model is loaded into your browser, Modelo can still be used to present regardless of whether you have a data connection or not.

Finally, what does it cost? If you’re a personal user, it’s free: you’re limited to a single user, 5GB of storage and a maximum model upload size of 50MB. At the free tier you can still share and collaborate with others as well as create VR models. For small businesses of up to 10 users, Modelo will set you back $25 per user per month, but you also get bumped up to 1TB of storage and model uploads of up to 1GB per model. If you need more than 10 licences, you can contact Modelo for enterprise pricing as well.

I’ve only been using Modelo for a short while but I already love it. I actually prefer it to Autodesk’s web based offering. The simplicity and execution really hits the mark.