Monday 1 August 2011

Considering Dual Monitors for Your Workstation

Are Two Monitors Better Than One?
The proverb 'two heads are better than one' also applies to the ideal number of monitors in a 3D workstation. 3D modelling can be a long and painstaking process, and any gain in accuracy and efficiency makes an architectural rendering workflow more profitable. Visualisation studios constantly strive to improve their pipelines, and while there are countless ways to do so, one of the quickest and easiest is to increase your screen real estate.

Before flat panel LCD monitors became accessible to the everyday user, dual monitor setups were less appealing because of the large amount of desk space each CRT monitor consumed. Today, relatively large LCD monitors with higher screen resolutions are becoming increasingly affordable; however, the price climbs steeply with size, so it is often more cost effective to buy two slightly smaller, less expensive LCD monitors than one very large and very costly one. Because of the way Microsoft Windows arranges application windows on the desktop, it is also arguably easier to manage those windows across two smaller monitors than on a single very large one.

What Are the Practical Applications?
The practical applications of a second monitor are almost endless. Being able to work with several applications at any given moment has enormous implications for the productivity of any visualiser. Being able to refer to and compare previous renders, read emails, search for reference images on the internet or glance at plans and elevations, all without minimising, restoring, resizing or closing and re-opening windows whilst you continue working in your 3D software package, is incredibly useful.

Why Stop At Two?
Graphics cards that support more than two monitors are in most cases still significantly more expensive than those that support only two. However, motherboards that support more than one graphics card are becoming more and more common thanks to demand from the video games sector, so two graphics cards driving three or four monitors is also a viable option depending on your budget. In many fields it is not uncommon for workstations to have six or more monitors.

Are Dual Monitors the Only Way of Increasing Screen Real Estate?
There are other methods of increasing screen real estate that will complement the use of multiple monitors and further increase your efficiency. Customizing your software's toolbars gives you quick access to the tools you use most often, but in most cases it also reduces viewport screen real estate. Becoming intimately familiar with, and customizing, keyboard shortcut hotkeys is therefore an essential way to maintain viewport sizes whilst increasing the efficiency of your workflow. A combination of customized menus and toolbars as well as hotkeys works best.

It is becoming progressively more cost effective for 3D visualisers to consider large multiple monitor setups for their workstations, and the bottom line is that with multiple monitors comes increased productivity and therefore profitability. In order to have the largest viewports possible along with all of your customized toolbars, menus and software applications on screen at the same time, you will need to consider working with dual monitors. Most current graphics cards support at least two monitors, so provided you have the desk space, dual monitors are a must for any serious 3D artist.


Find out more about architectural rendering.

Tuesday 26 July 2011

Purchasing a Workstation for an Efficient Modelling and Rendering Workflow

The computer industry moves at lightning speed, so any reference to specific figures, parts and components may be out of date within six to twelve months of the writing of this article. However, the general approach to purchasing a new computer for the specific purpose of modeling and rendering will be relevant for many years to come. I can only really speak from my own experience with the computers and workstations I've worked on, but hopefully you can find some useful information here that will help you along the way.

The most important thing to remember when putting together a computer is to double and triple check that all of the components you are about to purchase are compatible with one another before buying them. If you are unsure, ask somebody.

Questions to Consider
What are the specifications of the computer you're currently working with, and what would you like to gain from your new one? What is your budget? Will you be rendering large, complex scenes? Will you be rendering animations, still images or both? And so on.

Random Access Memory
The two main things to look out for when purchasing a computer with rendering in mind are RAM (Random Access Memory) and processing power. The more polygons and texture maps you have in your scene, the more RAM the computer will need to work efficiently, so RAM capacity is important. My workstation currently has 6GB of 1600MHz DDR3 RAM installed (the faster the RAM, the faster it communicates with the other components of your computer, but faster RAM is also more expensive). This is also where 64bit operating systems come into play (I'm currently using Windows 7 Pro 64bit). A 32bit operating system can only make use of about 3GB of RAM regardless of how much is actually installed, whereas a 64bit operating system can use as much RAM as you can afford to squeeze into the motherboard. Currently, between 6GB and 12GB is usually recommended for complex scenes. If you don't have enough RAM for the programs you work with, the computer will slow to a crawl and it will become very difficult to continue working. When all of your RAM is used up, your computer is far more likely to crash, and in some cases you can lose many hours of work.

The Processor
The processor is, in my opinion, THE single most important component when putting a computer together. The more cores the better, and the faster the clock speed the better. Newer processors are also generally better than older ones because they use newer microarchitectures; for example, a 2.6GHz processor released in 2011 will generally be faster (more efficient and more powerful) than a 2.6GHz processor released in 2007. My current workstation has an Intel Core i7 Extreme 965 at 3.2GHz, which was the most powerful processor on the market at the time of purchase. It is a quad core processor where each core runs two threads, which means that when I render a scene the processor renders with 8 buckets. As long as you have enough RAM, the processor has the biggest impact on how fast you can render a scene, which is why I recommend buying the best processor your budget can afford and building the rest of the computer around it. You then choose your motherboard based on the socket that your processor fits into; in my case the Core i7 fits into the LGA1366 socket. LGA stands for Land Grid Array, and it is the part of the motherboard where the processor sits.
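
As a quick sanity check on your own machine, the MAXScript Listener can report how many logical processors Max can see, which normally matches the number of buckets you'll get at render time. A rough one-liner, assuming a MAXScript-based package like 3DS Max:

-- Rough check: number of logical processors, which usually equals the bucket count.
format "Logical CPUs: %\n" sysInfo.cpuCount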

Heatsinks
The heatsink is the big metal block that sits on top of the processor and cools it by drawing heat away through conduction. The fans in the case then blow cool air over and through the heatsink to cool it down. Generally speaking, the stock heatsink that ships with your processor does a pretty good job of keeping it cool enough to work efficiently and maintain a respectable lifespan. However, you can buy aftermarket heatsinks that in some cases are more efficient than stock heatsinks, enabling more efficient use of the processor and a longer lifespan. I don't recommend overclocking expensive processors unless you have extensive experience in this area, or you may find that you drastically reduce the lifespan of your processor or, in extreme cases, destroy it altogether.

Graphics Cards
The next component is the graphics card. The graphics card is heavily involved in the modeling process because it renders the viewports in real time. Apparently graphics cards will also have an impact on render times in future releases of 3D software packages. A powerful graphics card is good to have as it will improve your computer's overall performance during the modeling process (provided you have enough RAM). I currently work with an MSI GeForce GTX 280 with 1GB of memory. There were probably slightly better graphics cards on the market at the time of purchase, but this card does the job for me. There are 'pro' cards that are far more expensive than the consumer cards, but I wouldn't bother with them as you won't notice a huge difference in performance for the extra money. If you want to use more than one monitor, make sure that your graphics card supports dual monitors; most do these days.

Monitors
You can save money by buying cheaper monitors. I use two 22-inch Samsung 2233BW LCD monitors; they were cheap and do the job very well. If you aren't using dual monitors at the moment, I definitely recommend considering two monitors with your new machine (I'll post about this topic soon). It's easy to set up and well worth the extra few hundred dollars for the second monitor if you have the desk space. If you don't have the desk space, monitor arms can hold your monitors at the right height without taking up any desk space at all. Get yourself a monitor calibrator and calibrate your monitors regularly to ensure that any adjustments you make during your workflow are based on accurate colour information.

Hard Disks
The hard disk you choose should have enough space to store all of your files, but also consider its speed. I have a 250GB Western Digital VelociRaptor, which runs at 10,000rpm; most others run at 7,200rpm. The faster the hard disk, the faster the computer can transfer, save and read your files. It's always going to be a trade-off between speed, storage space and price.

Cases and Cooling
Buy a case with enough space to fit all of your components and enough airflow (fans and so on) to keep them cool. You can get water cooling, but I wouldn't bother; it's more expensive and requires more maintenance, and in my opinion electricity and water don't mix.

Construction and Installation
Unless you have experience building computers, I would definitely recommend getting the store you purchase most of the components from to build the computer as well. You should get a build warranty from the store (usually 12 or 24 months). I know people who have tried to do it themselves with less than ideal results. Having said that, if you have a solid background in computer hardware, putting the computer together yourself could save you a few dollars that could be put towards better components.

Above all, remember to make sure that all the components are compatible with each other before parting with any hard-earned money, as there's nothing worse than purchasing incompatible components.

Keep asking questions and researching the latest technology, and soon enough you'll know exactly which system is right for you and your budget.


Find out more about architectural rendering.

Wednesday 20 July 2011

Frustrating Selection Cycle

Effective use of keyboard shortcuts significantly improves the efficiency of your workflow whilst reducing your reliance on toolbars and menus, which in turn increases viewport screen real estate. The default hotkeys, however, are not the most efficient configuration of shortcuts.

I find the cycling of selection tools on the default shortcut 'Q' incredibly frustrating. Surprisingly, the solution turned out to be about as simple as they come. By default, the keyboard shortcut 'Q' is assigned to 'Smart Select', which causes the selection tool to cycle each time 'Q' is pressed. In the 'Customize User Interface' dialogue box, assign 'Q' to 'Select Object' instead; the cursor will then switch to the selection tool without subsequent presses cycling through the other selection modes.


Find out more about architectural rendering.

Monday 18 July 2011

Utilizing Random Access Memory Efficiently for Faster Render Times

Random access memory (RAM) is the hardware component in your computer that programs use to store temporary information that they will need to refer back to throughout their regular use. It comes in different speeds, configurations and storage capacities.

Regardless of which 3D software package you use, making efficient use of random access memory can increase the speed and quality of your renders, particularly if you are currently using all of your RAM while the software is running. Although running out of RAM is most noticeable during the rendering process, establishing an efficient workflow begins at the very start of the project, with camera setup.

Lock in a Camera Viewpoint Before Beginning the Modelling Process
Camera setup is the first stage of any visualisation. It is important to remember that there is no need to model, in any great detail, areas that cannot be seen by the camera, which is why the viewpoint must be decided upon before beginning the modeling process. Taking an architectural visualisation as an example, the rear of the building being modeled will, in any professional studio, be modeled as quickly and with as little detail as possible in order to reduce the load on resources (fewer polygons and vertices - vertex counts affect file size - and potentially fewer texture maps). The impact of any such simplification on direct and indirect lighting also needs to be considered.

Polygon Count
The number of polygons in your scene affects the amount of RAM used at render time as well as throughout the modeling process. Make use of your software package's statistics function and set it up to show the polygon count of both the entire scene and the current selection. This keeps polygon counts at the forefront of every modeling decision you make. Consider how far an object is from the camera and optimize it accordingly. This is not to say that all objects should be made up of very few polygons; objects close to the camera will need to be modeled in finer detail. Keeping this in mind will make the best use of your time and your computer's RAM.
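
If you prefer to check counts from the MAXScript Listener rather than the viewport statistics, a rough sketch along these lines will total the face and vertex counts of whatever you currently have selected (GetPolygonCount returns the face and vertex count of a single object):

-- Rough MAXScript sketch: total face and vertex counts for the current selection.
(
    local totalFaces = 0
    local totalVerts = 0
    for obj in selection where superClassOf obj == GeometryClass do
    (
        local counts = GetPolygonCount obj -- returns #(faceCount, vertexCount)
        totalFaces += counts[1]
        totalVerts += counts[2]
    )
    format "Selection: % faces, % vertices\n" totalFaces totalVerts
)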

Texture Maps
Texture maps can be RAM-hungry little monsters if not optimized appropriately. If your final animation is going to be rendered out at a resolution of 1024x576 pixels, you should reduce the size of each map according to how large it appears in the animation. If, for example, a particular map covers one quarter of the frame at its largest point in the animation, then that map does not need to be any larger than 512x288 pixels. Although this sounds like half the size of the animation frame, it is actually one quarter, because the image is reduced to half its height as well as half its width. Be sure to keep all of your full size maps, because you are likely to need them later on down the track.
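
A rough way to audit your maps is to list the pixel dimensions of every bitmap file the scene references from the MAXScript Listener; anything far larger than it ever appears on screen is a candidate for downsizing. Something along these lines should do the job, assuming the referenced files are image formats Max can open:

-- Rough MAXScript sketch: list the dimensions of every bitmap file used in the scene.
(
    for mapFile in (usedMaps()) do
    (
        if doesFileExist mapFile then
        (
            local bmp = try (openBitMap mapFile) catch (undefined)
            if bmp != undefined then
            (
                format "% : % x % pixels\n" mapFile bmp.width bmp.height
                close bmp
            )
        )
    )
)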

Modifiers
Modifiers also use up RAM, so make sure that you delete any unneeded modifiers and collapse any that you will not need to adjust. It is good practice to save an iteration of your scene before collapsing modifiers so that you can always go back and adjust parameters if you need to.
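
As a sketch of that habit in MAXScript (the iteration file name here is just an example, so adjust it to suit your own naming convention), you could save a copy of the scene and then collapse the stacks of the selected objects:

-- Rough MAXScript sketch: save an iteration, then collapse the selected objects' stacks.
-- The file name is an example only.
(
    saveMaxFile ((getDir #scene) + "\\MyScene_beforeCollapse.max")
    for obj in selection do collapseStack obj
)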

Restart Your Computer
Before rendering large scenes always restart your computer to refresh the RAM.

Close All Other Software
Even smaller programs use up certain amounts of RAM so make sure that all unnecessary programs are closed before hitting that render button.

The Virtual Frame Buffer Window
The virtual frame buffer (VFB) is where you can see all the magic unfolding; it's the window that shows the rendering process in action. Every 3D artist will reluctantly admit that they've spent countless hours of their career watching buckets make pretty pictures in the frame buffer window. The VFB itself uses RAM, so if you're running low on RAM and don't need to watch the render, disable it in the render settings.

Render Buckets
Render buckets are the brackets that you see bouncing around in the VFB. There is usually one bucket per core or thread involved in rendering that particular frame; for example, a quad core processor will show four buckets at render time. The size of the render buckets can be changed in most 3D software packages. If you are in dire need of a little extra RAM, consider reducing the size of the buckets slightly. Each bucket stores all of the information that it can 'see' at any one moment in RAM, and larger buckets can 'see' more than smaller buckets at any given moment, so reducing the size of the buckets will reduce the amount of information stored in RAM. Be aware, though, that this will increase the render time.

64bit Operating Systems
The next time you upgrade to a new computer, consider the 64bit version of your operating system. 32bit operating systems are limited to approximately 2-3GB of RAM, whereas 64bit operating systems are limited only by your budget, and a little bit of extra RAM goes a long way. The catch is that although some 32bit software will run on a 64bit operating system (check with the software manufacturer), it will not be able to make use of the extra RAM. This means that in order to make the most of the extra RAM, the software that you use will also need to be 64bit. Ensure that all of the software and plugins you use come in 64bit versions, or at the very least have 32bit versions that are compatible with a 64bit operating system. 64bit software is gradually becoming more widely available.
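
If you're not sure what you're currently running, the MAXScript Listener can tell you; as far as I'm aware, is64bitApplication reports whether the running copy of Max is a 64bit build, and sysInfo.getSystemMemoryInfo returns the system's memory figures in bytes:

-- Rough MAXScript check: 64bit build and installed memory (figures are in bytes).
(
    format "64bit build: %\n" (is64bitApplication())
    format "System memory info: %\n" (sysInfo.getSystemMemoryInfo())
)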

Incorporating these practices into your daily workflow will hopefully make the process more fruitful and enjoyable. Happy rendering.


Find out more about architectural rendering.

Thursday 14 July 2011

Saving Disk Space by Reducing File Sizes in 3DS Max

Every 3D artist has experienced the annoyance of their workstation crashing. This is particularly frustrating when you haven't saved the file recently and could potentially lose a whole day's worth of work. It's therefore vital that you save your work regularly in iterations, keeping 'old' versions to refer back to if necessary. If, for example, your working file becomes corrupt, a fast and easy way to recover most of your work is to go back to the latest saved version of that file. Another advantage of keeping iterations is that a client may change their mind about an aspect of the project - the floor covering in an architectural visualisation, for example - and decide that they preferred the timber boards over the latest floor covering. In this situation, bringing in the material from the old version of the file might only take a few seconds, whereas recreating it from scratch could take far longer. Keeping old versions of the file you're working on will more than likely save you time in the long run.

Maintaining an archive of old versions like this can very quickly eat up large amounts of disk space, especially when working with large, complex scenes. It's a good idea, because of this, to keep file sizes in the back of your mind throughout your workflow.


Vertex Count

File sizes are heavily related to the vertex count of the scene, which you can view by enabling the statistics display in the viewport. Reducing your vertex count will reduce the file size. It is possible to have two objects with the same number of polygons but differing vertex counts, or vice versa. A box, for example, has 12 triangular polygons and 8 vertices, while six planes arranged into the same box shape also have 12 polygons between them but 24 vertices. This is because the polygons in the box share vertices while the six separate planes do not. Generally speaking, though, optimizing your scene's polygon count will also reduce the vertex count and therefore the file size.
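
You can verify those numbers for yourself from the MAXScript Listener; the sketch below builds a default box and six single-segment planes, prints their triangle and vertex counts, and then deletes the test objects again:

-- Rough MAXScript sketch: compare the face/vertex counts of a box
-- with those of six single-segment planes.
(
    local b = Box length:10 width:10 height:10
    local boxCounts = GetPolygonCount b -- returns #(faceCount, vertexCount)
    format "Box: % faces, % vertices\n" boxCounts[1] boxCounts[2]

    local planes = for i = 1 to 6 collect (Plane length:10 width:10 lengthsegs:1 widthsegs:1)
    local totalFaces = 0
    local totalVerts = 0
    for p in planes do
    (
        local counts = GetPolygonCount p
        totalFaces += counts[1]
        totalVerts += counts[2]
    )
    format "Six planes: % faces, % vertices\n" totalFaces totalVerts

    delete b
    delete planes
)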


Editable Mesh, Editable Poly or Primitive Object

It won't always be possible to keep assets as primitive objects, but when the opportunity arises it is a good idea to leave objects in their primitive state. This can save large amounts of disk space, particularly when working with highly segmented objects. A box with 100 length, width and height segments saved in 3DS Max 2009, for example, has a file size of less than 200KB. The file is relatively small because primitive objects are defined by feeding their parameters into an algorithm: the file containing the box primitive only has to store the values for the length, width and height parameters along with their respective segment counts, which is a small amount of information and hence a small file. If we collapse that object down to an editable mesh, the file size escalates to over 5MB, and if we collapse it down to an editable poly we end up with a file size of almost 9MB - about 45 times the size of the file containing the original primitive object. The reason for these blowouts in file size when collapsing to either editable mesh or editable poly is that the file now has to store the X, Y and Z coordinates of every vertex. The editable poly file is larger than the editable mesh file because it contains more options and parameters, so when the added functionality of the editable poly isn't needed, consider converting to an editable mesh instead.
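
If you'd like to reproduce the comparison yourself, a MAXScript sketch along these lines saves the same heavily segmented box out three times - as a primitive, as an editable mesh and as an editable poly - and prints the resulting file sizes (the output folder and file names are just examples, and the exact figures will vary between Max versions):

-- Rough MAXScript sketch: compare file sizes of a primitive box,
-- an editable mesh and an editable poly. The output folder is an example only.
(
    local outDir = getDir #export
    local b = Box lengthsegs:100 widthsegs:100 heightsegs:100

    saveNodes #(b) (outDir + "\\box_primitive.max")
    convertToMesh b
    saveNodes #(b) (outDir + "\\box_mesh.max")
    convertToPoly b
    saveNodes #(b) (outDir + "\\box_poly.max")
    delete b

    for f in #("box_primitive.max", "box_mesh.max", "box_poly.max") do
        format "% : % bytes\n" f (getFileSize (outDir + "\\" + f))
)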

Work with these concepts in mind during your daily workflow and you will see a reduction in general file sizes allowing you more efficient use of your disk space.


Find out more about architectural rendering.

Wednesday 13 July 2011

Customizing Toolbar Icons in 3DS Max

Customizing toolbars can be a quick and easy way to improve efficiency in your rendering workflow. It's important, however, to be mindful of the reduction in viewport screen real estate. Various useful modifiers, such as 'TurboSmooth', 'Symmetry', 'Shell' and 'Sweep', don't have icons assigned to them, so they show up in customized toolbars as text buttons rather than icon buttons. The text is larger than the default icon size, which increases the size of the toolbar and subsequently reduces viewport screen real estate. You can assign default icons to these modifiers by right-clicking on the offending text button in the toolbar and selecting 'Edit Button Appearance'.

However, if you'd like to assign more intuitive icons to them, first create your icon image in a program like Photoshop and save two copies of it: the first as a 24x24 pixel bitmap file and the second as a 16x16 pixel bitmap file. Using the TurboSmooth modifier as an example, name them 'TurboSmooth_Modifier_24i.bmp' and 'TurboSmooth_Modifier_16i.bmp' respectively. You'll then need to create a black and white 'alpha' copy of both of these, where the white portion of the alpha image corresponds to what you want shown in your icon and the black portion makes the corresponding part of your icon transparent. Name these alphas 'TurboSmooth_Modifier_24a.bmp' and 'TurboSmooth_Modifier_16a.bmp'. Save the four image files in the following folder:

C:\Program Files\Autodesk\3ds Max Design 2012\UI\Icons

Next you'll need to modify the MaxScript that controls the toolbar icon images in Max. As always when modifying MaxScript files, save a backup copy of the original so that you can revert to it if need be. Right-click on the modifier text button in the toolbar and select 'Edit Macro Script'; this will take you directly to the portion of the script that controls the modifier you right-clicked on. For 3DS Max Design on 64bit Windows 7, the Macro_Modifiers.mcr file can be found in:

C:\Program Files\Autodesk\3ds Max Design 2012\UI\MacroScripts\Macro_Modifiers.mcr

Under the line:
ButtonText:~TURBOSMOOTHMOD_BUTTONTEXT~

Add the following:
Icon:#("TurboSmooth_Modifier",1)

Save the file, close and re-open Max. Your custom icon button should appear in place of the text button in your toolbar. If you have trouble saving the Macro_Modifiers.mcr file you might need to adjust the security settings found in the file properties.


Find out more about Shadow Gap and architectural rendering.

Modifying the Clone Default Behaviour in 3DS Max Design

One of the differences between 3DS Max and 3DS Max Design is that Design uses 'instance' rather than 'copy' as the default behaviour when cloning.

This property can't be adjusted in the preferences, configuration or UI settings; in order to change the default cloning behaviour in 3DS Max Design you need to edit the currentdefaults.ini configuration file.
To get access to this file, you'll first need to show hidden files and folders in the Windows Control Panel. The file can be found in C:\Users\YourUserName\AppData\Local\Autodesk\3dsMaxDesign\2012 – 64bit\enu\defaults\currentdefaults.ini
Before making any changes to this file, save a separate copy of the original just in case anything goes wrong.
Press Ctrl+F to search for the word 'clone'. This will take you to the section of the file that deals with cloning. You should see:
; Transform Tool defaults
;  ObjectCloneType
;    0 = Copy
;    1 = Instance
;    2 = Reference
[TransformTool]
; Instance
ObjectCloneType=1

Change the portion:
; Instance
ObjectCloneType=1

to:
; Copy
ObjectCloneType=0

Save the file, then close and re-open Max. You should now find that copy is the default cloning behaviour. If you're having trouble saving the currentdefaults.ini file, you may need to adjust its security settings by right-clicking on the file and opening its properties.
Don't forget to re-hide hidden files and folders in the Control Panel to prevent important system files from being accidentally moved or deleted.


Find out more about Shadow Gap architectural rendering.