Leica Cyclone 7.1.1: Sub-Dividing Large Point Clouds for Autodesk Applications – Exporting the Data

This workflow will show you how to export your sub-divided point cloud.
Hint: You can click on any image to see a larger version.

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] CREATING PIECES AND EXPORTING DXF’S[/wptabtitle]

[wptabcontent]V. Creating Pieces and Exporting DXF’s – Once the first two vertical RP’s have been placed, the first sectional piece will be copied into its own MS and exported.

A. In Piecing_Base, activate the top view (you may lock the rotation of the view to help create consistent fences > Viewpoint > View Lock > Rotation)

B. Create a fence around the first area marked by RP’s Vertical_001 and Vertical_002 > Copy fenced to new MS

1. Leave a slight overlap beyond the RP to allow later re-assembly of the exports

2. Confirm that nothing is selected before creating the fence; otherwise the entire cloud will copy

clip_image022[6]Figure 11 – Piece_001 is fenced and copied to a new MS, using the RP’s as guides; note the fence overlaps slightly beyond the RP, capturing redundant points between adjacent pieces for later re-assembly.
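Conceptually, fencing with overlap amounts to keeping the points that fall between the two RP positions plus a small margin on each side. Cyclone handles this interactively, but the idea can be sketched outside Cyclone (for example, when post-processing exported points yourself); the plane positions and the 0.05 m overlap below are illustrative values, not Cyclone settings.

```python
# Conceptual sketch of "fence with overlap": keep points whose x coordinate
# falls between two vertical reference planes, padded by a small overlap so
# adjacent pieces share redundant points for later re-assembly.
# Plane positions and the 0.05 m overlap are illustrative, not Cyclone settings.

def fence_slice(points, rp_start, rp_end, overlap=0.05):
    """Return the points between rp_start and rp_end (x-axis), plus overlap."""
    lo, hi = rp_start - overlap, rp_end + overlap
    return [p for p in points if lo <= p[0] <= hi]

cloud = [(0.00, 1.0, 2.0), (1.02, 0.5, 1.1), (2.04, 0.2, 0.9), (3.10, 0.7, 1.5)]
piece_001 = fence_slice(cloud, rp_start=1.0, rp_end=2.1)
# the points at x=1.02 and x=2.04 fall inside the padded slab
```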

[/wptabcontent]

[wptabtitle] ACTIVATE SIDE OR FRONT VIEW[/wptabtitle]

[wptabcontent]

C. In the new MS > Activate a side or front view (you may need to unlock the view lock > Viewpoint > View Lock)

clip_image024[6]

Figure 12 – Front view of Piece_001 in a new MS; note all vertical RP’s are invisible, allowing the horizontal division (the green line/RP in the center) to be clearly seen

 

[/wptabcontent]

[wptabtitle] ACTIVATE RP DIALOGUE BOX[/wptabtitle] [wptabcontent]

D. Activate RP dialogue box > Turn off the visibility of the vertical RP's > Turn on the visibility of the horizontal RP's > You should now see the first piece of your data with RP's dividing it into more pieces (Suggested MS name Piece_001).

clip_image026[6]

Figure 13 – The top portion of Piece_001 is fenced and copied to a new MS; the green line in the center is RP Horiz_001 showing the division between the two levels

 

[/wptabcontent]

[wptabtitle] CREATE FENCE AND COPY TO NEW MS[/wptabtitle] [wptabcontent]E. Create a fence around the top piece > Copy fenced to new MS (Suggested MS name Piece_001_A) > Again, remember to leave a slight overlap beyond the RP

clip_image028[6]Figure 14 – Piece_001_A in its own MS, selected and being exported; note the overlap below the RP, creating some redundancy between adjacent scans

 [/wptabcontent]

[wptabtitle] EXPORT[/wptabtitle] [wptabcontent]

F. In new MS, select piece > Export > Confirm file type is DXF R12 (NOT 2D dxf – currently DXF R12 is the supported format for direct export from Cyclone) > Selected items > Export (Suggested export name Piece_001_A)

NOTE: If Cyclone immediately states “0 objects exported” either the file type is incorrect or the point cloud was not selected before the export command. Check your settings and selection.

G. The number of points is displayed as the export begins. Roughly, when exporting, 1,000,000 points equals a 1 megabyte DXF. Noting this value at the beginning of each export helps you confirm that the resulting imports will be manageable.

1. Even if you have evenly-sized pieces (based on physical measurements), the size of the data in the pieces varies greatly depending on the complexity within each piece. 5 square meters of measured space can have more points than 20 square meters of space depending on what is within that volume (ie: a complicated network of pipes versus an empty room). Taking this into account and watching the export figures is essential to figuring out how to divide up your specific project.

2. If the number of points is vastly smaller or larger than the desired file size, delete the MS for the piece > adjust the RP > Copy out the new piece and try again!
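Using the rough ratio noted in Step G (~1,000,000 points per 1 MB of DXF for this medium-reduction cloud), you can estimate how many pieces a cloud will need before you start cutting. A minimal planning sketch; the 4 MB target per piece is an illustrative number, and the real target depends on what the destination application can handle.

```python
# Rough planning aid based on the ratio noted above: for this medium-reduction
# unified cloud, ~1,000,000 points produce roughly a 1 MB DXF on export.
# The 4 MB target per piece below is an illustrative choice, not a rule.

def plan_pieces(total_points, target_mb_per_piece, points_per_mb=1_000_000):
    """Estimate the target point count per piece and how many pieces result."""
    target_points = target_mb_per_piece * points_per_mb
    n_pieces = -(-total_points // target_points)  # ceiling division
    return target_points, n_pieces

# e.g. the 48,000,000-point unified cloud from this example project
target, n = plan_pieces(total_points=48_000_000, target_mb_per_piece=4)
# aim for ~4,000,000 points per piece -> 12 pieces
```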

[/wptabcontent]

[wptabtitle] CYCLONE BUGS?[/wptabtitle] [wptabcontent]3. Cyclone bugs?

i. When the point cloud piece is copied to its own MS, it may be selected and the object info can be displayed. However, this info shows the number of points in the parent point cloud, not the piece. The accurate number of points in the piece appears to be displayed for the first time when the export begins.

clip_image030[6]

ii. When you name the export file, it always prompts that this file exists and asks if you want to overwrite it; answer yes.

[/wptabcontent]

[wptabtitle] CLOSE EACH MODELSPACE[/wptabtitle] [wptabcontent]H. Close each MS > be sure to select CLOSE when prompted; DO NOT merge into the original MS!!

clip_image032[6]

I. Repeat Steps D through G for each vertical division of Piece_001, naming each export and MS/MS View as you go (Piece_001_A, Piece_001_B, etc)

[/wptabcontent]

[wptabtitle] RENAME YOUR MODELSPACES[/wptabtitle] [wptabcontent]J. Before continuing, it is recommended that you go to Cyclone Navigator and rename each MS and each MS view as shown in Figure 15 to avoid confusion. After Piece_001 is completely divided and exported, return to Step IV, Section D and repeat the steps for dividing up the next piece in the direction you are gridding (Piece_002), reducing it to its sub-divisions (Piece_002_A, Piece_002_B, etc)
clip_image034[6] clip_image035[6]

Figure 15 – A closer view of the file structure in Cyclone Navigator; the MS for each vertical sectional slice is named Piece_###. The MS and MS View for each horizontal piece (and the resulting .dxf exports) are named Piecing_###_A , B, etc.

 [/wptabcontent] [/wptabs]

Posted in Cyclone, Scanning, Software, Uncategorized, Workflows

Leica Cyclone 7.1.1: Sub-Dividing Large Point Clouds for Autodesk Applications

This workflow will explain a method of sub-dividing registered, unified point cloud data into more manageably-sized pieces.
Hint: You can click on any image to see a larger version.

[wptabs style=”wpui-alma” mode=”vertical”]

[wptabtitle] INITIAL CLEAN-UP[/wptabtitle]

[wptabcontent]This workflow may follow “Scanning, and Processing with Targets in Complex Interior Spaces” as a suggested method of sub-dividing registered, unified point cloud data into more manageably-sized pieces. In this example, data is broken up so that it can be imported into AutoCAD and REVIT; as of early 2011, these Autodesk platforms are becoming better at accommodating point clouds; however, due to size limitations, currently only reduced data sets or very small projects can be exported from Cyclone to these applications in their entirety. This is a suggested workflow for quickly breaking up data for import into such Autodesk applications.

I. Initial Clean-Up & Settings – Create MS from registered, unified point cloud > Delete extraneous points and any noise > Save this as Piecing_Base

NOTE: In this example, the original data of a complex interior chilling plant ranged in density from approximately 1cm at the most dense to 10 cm at the least dense. After registration, this data was unified with the medium reduction setting. This resulted in more consistent 2.5 cm spacing in the majority of the data that proved to be manageable in exporting to other formats and software.

II. Set the view to Orthogonal > Viewpoint Tab > Orthogonal [/wptabcontent]

[wptabtitle] SETTING HORIZONTAL REFERENCE PLANES[/wptabtitle]

[wptabcontent]III. Setting Horizontal Reference Planes – Once the data is clean, a series of reference planes will be created to serve as visual aids for slicing the data into smaller pieces. Open the Reference Plane dialogue box and leave it open for the workflow.

A. Open RP dialogue box > Tools > Reference Plane > Add/Edit Reference Plane > Click the visibility box to activate the original, default reference plane (RP); this is usually a horizontally oriented plane that is aligned to the X-Y plane of your data

1. If your data needs to be divided up vertically and horizontally (ie: multiple levels of dense data), double click the default RP and rename it Horiz_Level_001 > follow Step III-B

2. If your data does not need to be divided up vertically (ie: single level of data), move to Step IV

B. Move RP_Horiz_Level_001 to a significant horizontal point in the data > In this example, a reference plane was set on a point on the floor dividing 2 levels > Zoom into the data so that you can see the floor or significant horizontal feature > use Selection tool to select a point on the feature > Tools > RP > Set Plane Origin at Pick Point

[/wptabcontent]

[wptabtitle] CREATE ADDITIONAL HORIZONTAL RP’S[/wptabtitle]

[wptabcontent]C. Create additional horizontal RP’s to divide up the data in the vertical direction > Within the RP dialogue box, left click on RP_Horiz_Level_001 to select it > Use the copy icon at the top of the dialogue box to Copy Reference Plane > Name the new RP to identify the level (Horiz_Level_002, _003, etc) > LC the new RP > LC the icon at the top of the dialogue box to Set Active Reference Plane > The new RP is now active and can be placed at significant features as outlined in Step III-B (see figures 1 & 2).

clip_image002[6]Figure 1 – The Reference Plane dialogue box has been opened and the default reference plane has been named RP_Horiz_001 and placed on the floor dividing the two levels; note the main level above the RP and the basement below. This building required only one horizontal plane

clip_image004[6]

Figure 2 (Left) – A closer view of the dialogue box and the options for copying/adding RP’s and for activating the RP[/wptabcontent]

[wptabtitle] ADDITIONAL REFERENCE PLANES & ESTIMATING DATA SIZE[/wptabtitle]

[wptabcontent]IV. Additional Reference Planes & Estimating Data Size – Ultimately, how the data is divided depends on the original density and complexity of the point cloud data and the needs of the end-user/end-software. The goal is to strike a balance between creating consistently-sized pieces and using sufficient overlap to allow re-assembly with minimal redundancy. Understanding the complexity within the different areas of your project is essential to understanding how/why you place sectional cuts. 5 square meters of measured space can have more points than 20 square meters of space depending on what is within that volume (ie: a complicated network of pipes versus an empty room).

In this example, the unified point cloud (48,000,000 points) needed to be divided into 300-400 MB pieces in a DXF format to import into REVIT.

Note: Users may find it helpful to adjust the RP’s to match the coordinate system or vice versa; this is covered in the Basic Cyclone Workflow and is not covered here. In this example, features within the scan data (walls and floors) are used to divide the space instead of defined coordinates. IMPORTANT TIP: The medium reduction unified cloud creates exports with a clear ratio between the number of points and the size of the resulting DXF file in which 1 million points roughly creates a 1 megabyte DXF file. [/wptabcontent]

[wptabtitle] CREATING ADDITIONAL REFERENCE PLANES[/wptabtitle]

[wptabcontent]V. Creating Additional Reference Planes

A. Create a copy of an existing RP (see figure 2) > Make the new plane active and name it RP_Vertical_001

NOTE: turning off the visibility of RP’s that you are not actively using helps to understand what you’re seeing; this is done in the RP dialogue box

B. Tools > RP > set Vertical_001 to the Y-Z Plane or to the Z-Y Plane depending on the orientation of your data and the direction that you want to grid it > Visually examine the data to see if this standard view aligns to the grid you desire

1. RC anywhere in the area where the toolbars are located > Enable the Viewing Toolbar > Top view while in Orthographic viewing mode allows you to see if the reference plane is aligned to the feature (an interior wall for example) that you are using to divide the data > If the plane is aligned to your data, move to Step D.

2. Note that Cyclone toolbars often hide one another and you may need to pull them into the main viewing area to see all of them

[/wptabcontent]

[wptabtitle]CREATING ADDITIONAL REFERENCE PLANES 2[/wptabtitle]

[wptabcontent]
C. If a standard view or coordinate does not align to the feature, leave the vertical RP in place:

1. Create a fence and copy a piece of the data that contains a vertical feature running in the direction in which you want to grid/divide the data (here the exterior walls running north-south and east-west are used to divide the data, so a corner is copied to a new MS)

 

clip_image006[6] clip_image008[6]Figure 3 (Left) – Top View, a corner is fenced; Figure 4 (Right) – Perspective View, the corner section copied into its own MS

[/wptabcontent]

[wptabtitle]CREATING ADDITIONAL REFERENCE PLANES 3[/wptabtitle]

[wptabcontent]2. In the new temporary MS, select a point on the wall > Create Object > Region Grow > Patch > View results in MS before accepting results, adjusting settings as needed to create a small vertical patch on each wall/feature being used to grid and divide the data

clip_image010[6]
clip_image012[6]
Figure 5 (Left) – A patch is created on each wall/vertical feature; Figure 6 (Right) – The resulting patches with the point cloud hidden

clip_image014[6]

Figure 7 – Top view in orthographic mode confirms that the planes are perpendicular and have the correct z-direction

3. Select the patches (here there are 2 patches representing the north-south and east-west directions) > Copy > Close/delete the temporary MS

[/wptabcontent]

[wptabtitle]CREATING ADDITIONAL REFERENCE PLANES 4[/wptabtitle]

[wptabcontent] 4. In the original MS (Piecing_Base), open the Layers dialogue box (Shift + L) > Create a layer called Patches > Make the new layer current > Paste the patches into the base MS

Note: The use of a temporary MS to create the patches allows the unified point cloud in the base to remain a single cloud; when an object such as a patch is created, the points are automatically deleted, and if you choose to insert a copy of the deleted points, they become individual sub-set point clouds, resulting in undesired multiple clouds in the base.

5. Select a patch > Confirm that RP_Vertical is active and visible > Tools > RP > Set on object > Activate the top view to confirm that the RP is now aligned to the wall/feature

6. Copy/add a vertical RP for each patch/direction > Turn off the visibility of the layer Patches

clip_image016[6]Figure 8  – Two patches representing the two walls are pasted into the base MS

 

clip_image018[6]Figure 9  – The vertical reference plane is aligned to the wall/patch

[/wptabcontent]

[wptabtitle]CREATING ADDITIONAL REFERENCE PLANES 5[/wptabtitle]

[wptabcontent]

D. Once you have a vertical RP that is aligned to each wall/patch > Create copies of the RP’s and move them to significant points/features in the project that will allow consistently sized pieces (You are basically placing guides by which you will cut the data into pieces)

1. Choose the first direction to grid > Select the vertical RP (RP_Vertical_001) > Confirm it is visible and turn off the visibility of the other RP’s

2. Copy Vertical_001 > Name the new RP Vertical_002 > Make Vertical_002 active

3. Select a point along the direction you are gridding to define the location for Vertical_002

a. Look for regularly occurring features (such as panels on the roof or interior walls/features) to begin placing the RP guides

 b. Use the measure tool (Multi-Select 2 points > Tools > Measure > Distance > Point to Point) to further examine the distance between divisions you are considering
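The point-to-point measurement in step 3b is simply the 3D Euclidean distance between the two picked points, which you can sanity-check by hand when planning your divisions; the coordinates below are made-up examples, not project data.

```python
import math

# The Point to Point measurement is the 3D Euclidean distance between the
# two picked points: sqrt((x1-x2)^2 + (y1-y2)^2 + (z1-z2)^2).
# The coordinates here are illustrative examples.

def point_to_point(p, q):
    """Distance between two picked points given as (x, y, z) tuples."""
    return math.dist(p, q)

d = point_to_point((0.0, 0.0, 0.0), (3.0, 4.0, 0.0))
# -> 5.0
```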

[/wptabcontent]

[wptabtitle] CREATING ADDITIONAL REFERENCE PLANES 6[/wptabtitle]

[wptabcontent]4. Note the information regarding number of points and data size at the beginning of this section

a. The complexity of the data within the section affects the file size of each piece tremendously. It’s a matter of testing and evaluating until you become familiar with the density of your data and find what division/grid size will work

b. NOTE: the goal is to have consistently-sized data evaluated by the number of points/resulting megs; this does not always mean pieces have similarly-sized physical dimensions.

5. Once the point is selected > Tools > RP > Set Plane Origin at Pick Point

[/wptabcontent]

[wptabtitle] CREATING ADDITIONAL REFERENCE PLANES 7[/wptabtitle] [wptabcontent] You have now set up the guides for the first sectional piece to be copied out and exported. It is recommended that you place the vertical RP guides one at a time, checking the size of the resulting point cloud and export files until you have a good understanding of how to divide up the data consistently. Going through the export process is the only way to verify the resulting file size and to confirm that your grid dimensions are appropriate to the complexity and density of each area.

clip_image020[6]

Figure 10 – Top view of the north end of the building; the visibility of the horizontal RP’s has been turned off for clarity. The top green line is the first vertical reference plane (Vertical_001) that was created from the wall/patch. The bottom green line is Vertical_002 that has been moved by selecting a point on the seam between metal panels on the roof. The area between the two RP’s represents Piece_001. This piece/sectional slice will be copied and exported to see if the estimated location for the RP results in an appropriate file size.

[/wptabcontent]

[wptabtitle]CONTINUE TO…[/wptabtitle]

[wptabcontent] Continue to Leica Cyclone 7.1.1: Sub-Dividing Large Point Clouds for Autodesk Applications – Exporting the Data [/wptabcontent] [/wptabs]

Posted in Workflows

Instructions for Using Leica’s TruView Viewer

[wptabtitle] DOWNLOAD[/wptabtitle] [wptabcontent]1. Download Leica’s FREE TruView Internet Explorer Plug-in here.

NOTE: You are able to use TruView once without the plug-in; The plug-in is for Windows Internet Explorer only.

2. Open a TruView Site Map in Internet Explorer.  Here is an example Site Map from the University of Arkansas Chilling and Heating Plants.  And here is an example Site Map from the University’s Vol Walker Hall.[/wptabcontent]

3. The site map opens with each of the scanner locations marked with a yellow triangle and the name of the scan > These icons are hyperlinks into the TruView space where you have a view from the scanner’s location.  You will see exactly what the scanner saw from that location, and depending on settings, the other scanner locations may be visible; if an error appears, confirm that you are using Internet Explorer

Figure 1 – Hyperlink that allows you to enter viewpoint of the scanner position from the TruView site map.  The number represents the scanner station identification. 

 

4. Left Click in the center of the icon to open the TruView Space

[wptabtitle] USING TRUVIEW[/wptabtitle] [wptabcontent]5. Once in the TruView Space, navigate, take measurements, and move between scanner locations as wanted.

Basic Navigation: PAN = Left Mouse + Drag, ZOOM = Roll Middle Mouse.

Also see the information tabs on the left and the tool icons:

Measure Tab – shows the properties of measurements taken and allows the user to set the units; This is also where the user adjusts the visibility of other scan locations (ie: Neighbor TruViews) and the visibility of the point cloud based on range or altitude (when these fields are not checked, the full range and altitude is displayed)
Markup Tab – displays the information about the user’s markups and allows markup.xml files to be imported and exported
View Tab – Allows the user to create and save specific views

Toolbar Icons:

[/wptabcontent]

[wptabtitle] TRUVIEW ICONS[/wptabtitle] [wptabcontent]6. TruView Icons

 [/wptabcontent]

Posted in Leica Truview, Workflows

Using Leica’s COE Plug-In in AutoCAD

This workflow will show you how to use Leica’s COE plug-in to import and export objects and points in AutoCAD.
Hint: You can click on any image to see a larger version.

[wptabs style=”wpui-alma” mode=”vertical”]

[wptabtitle] INTRODUCTION[/wptabtitle] [wptabcontent]

Using COE Plug-In to Import & Export Objects and Points in AutoCAD

Leica’s CloudWorx and COE (Cyclone Object Exchange) applications allow the user to view, import, and export files between Cyclone and CAD software while maintaining links and information. This relationship allows the user to take advantage of the accuracy of the point cloud data and the advanced modeling functions in AutoCAD; it also makes comparisons, analysis, and visualizations possible.

The COE import/export tool allows points and modeled objects to be imported/exported/edited between Cyclone and ACAD (objects can be imported/exported with or without the point cloud data) and must be installed separately. Objects and points can be directly edited in the CAD environment. Importing/exporting points with the COE tool is only recommended for small point sets (less than 1,000,000 points).  See the GMV guide ‘Leica CloudWorx 4.2 and AutoCAD 2012’ for more information about the CloudWorx application.

[/wptabcontent]

[wptabtitle] CONFIRM INSTALLATION AND SET UNITS[/wptabtitle] [wptabcontent]

I. In ACAD, confirm that the COE tool has been installed (enter COEIN in the command line in ACAD; an import options dialogue box should appear).  If it is not installed, download the installation file from the Leica downloads page (check for updates, as the plug-in changes as versions update). During installation, the .exe will open AutoCAD, and the command line (F2) will note the installation and the availability of new commands such as COEIN & COEOUT.

II. UNITS -> To avoid complications, it is highly recommended to work in the same units as the Cyclone Files > CHECK UNITS IN ALL NEW DRAWINGS BEFORE DRAWING, MODELING, OR IMPORTING > In new ACAD drawing -> Command Line: UNITS > Adjust options to match Cyclone > Generally, Type: Decimal, Insertion Scale Units: Meters > OK

[/wptabcontent]

[wptabtitle] COE COMMANDS[/wptabtitle] [wptabcontent]

III. The following COE commands may be accessed through these entries or through the toolbar

COEIN: Import COE files into ACAD
COEOUT: Export contents of drawing to COE file
COEXPLODE: Explodes blocks into individual objects
COEANN: Toggles annotations from Cyclone on/off

 

[/wptabcontent]

[wptabtitle] IMPORT OBJECTS INTO ACAD[/wptabtitle] [wptabcontent]

IV. To import objects into ACAD -> command line: COEIN -> dialogue box appears -> Under ‘Import File Name’ browse for file -> Import Options:

  • Import objects as ACIS Solids -> imports objects as 3D ACIS solids
  • Import Objects as Blocks -> Blocks are made up of one or more objects that are combined to create a single object; using blocks results in the best compression rate when objects are brought to ACAD (per Cyclone 7.0 Help), and blocks are useful for repeating objects. When editing a block, all similar blocks are simultaneously edited. A block must be exploded or “re-written” to remove or adjust this link between identical blocks. Dynamic blocks are more easily editable and are only available in newer versions of ACAD. See ACAD help -> “Work with blocks” for more information
  • Import Point Sets -> In general, CloudWorx is recommended for large point clouds (1,000,000+ points), while the COE tool is recommended for objects and smaller point clouds (less than 1,000,000 points) -> If Import Point Sets is selected and you experience problems/crashing, it is recommended that you either decrease the size of the point cloud for import or that you use CloudWorx to view the point cloud versus import it
  • Log files -> log files are automatically created, adjust path as desired
  • Units Preference -> UNITS MUST MATCH (see Step II) -> In most cases, scan data is acquired in meters; if imported in a different unit, ACAD should automatically apply a scale factor but using the same units avoids problems (If acquired and imported in the same unit, this scale factor will be 1)
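The unit scale factor mentioned above is a single uniform multiplier applied to every coordinate on import (1.0 when the units already match). A minimal sketch of the idea; the conversion table and point values are illustrative and not part of the COE tool itself.

```python
# Sketch of the uniform unit scale factor applied on import when the drawing
# units differ from the units the scan was acquired in (factor 1.0 when they
# match, as noted above). The table and point values are illustrative only.

SCALE = {
    ("m", "m"): 1.0,            # units match -> no change
    ("m", "ft"): 1.0 / 0.3048,  # meters to feet
    ("m", "mm"): 1000.0,        # meters to millimeters
}

def convert_point(point, src="m", dst="m"):
    """Apply the uniform scale factor for a src -> dst unit conversion."""
    f = SCALE[(src, dst)]
    return tuple(round(c * f, 6) for c in point)

convert_point((1.0, 2.0, 0.5))            # same units -> unchanged
convert_point((1.0, 2.0, 0.5), dst="mm")  # -> (1000.0, 2000.0, 500.0)
```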

[/wptabcontent]

[wptabtitle] EXPORT OBJECTS FROM AUTOCAD[/wptabtitle] [wptabcontent]

 

V. To export objects from AutoCAD -> Toolbar or command line: COEOUT -> Dialogue box appears -> Browse to project folder and name file -> Select Objects to Export -> OK -> Export Options:

  • Export objects USING ORIGINAL COE file as reference: When selected, the COE file that is created/exported from ACAD references the original Cyclone data set, enabling as much original object information/data as possible to be maintained when the object is brought back into Cyclone; select when integrating ACAD data with the original COE file from Cyclone.
  • Export objects WITHOUT original COE reference file: When selected, the COE file is new and maintains no connection or reference to the Cyclone data set or original COE file.
  • Export Objects: choose option All, Visible, or Selected objects
  • Units: as when importing (Step IV), UNITS MUST MATCH -> In most cases, scan data is acquired in meters; if exported in a different unit, a scale factor should automatically apply, but using the same units avoids problems (If acquired and exported in the same unit, this scale factor will be 1)

[/wptabcontent]

[wptabtitle]CONTINUE TO…[/wptabtitle] [wptabcontent]Continue to Using Leica’s COE Plug-In in Cyclone.[/wptabcontent]

[/wptabs]

Posted in Leica CloudWorx, Workflow, Workflows

University of Arkansas – Vol Walker Building – Exterior

Working with the University of Arkansas’ Facilities Management and Planning Departments, CAST is documenting the historical Vol Walker Building and its renovation. Here are merged scans of the exterior of the building, which were collected with the Leica C10 Scan Station scanner. The project includes multiple floors within the building interior as well as the building exterior. Exterior scans were collected with a point spacing of approximately 5-10 cm. The data sets have been separated due to file size and data density. Interior scans were collected with a point spacing that ranged from less than a centimeter at the most dense (at a range of < 1 meter) to approximately 5 cm at the least dense (at a range of 25 meters). These scans were then reduced to a more consistent point spacing of 1 cm for potential future use in historical preservation documentation.

Vol_Walker_Building_Exterior.zip (41.9 mb) (7.5 cm spacing in .pts file format)

Sitemap_Vol_Walker_Exterior.htm – Explore the data set in Leica TruView, which requires the free Leica TruView viewer and Internet Explorer.  For instructions on using the free TruView data viewer and for a complete list of links to the TruView data related to this project, please see: Accessing Vol Walker Interior TruViews.

Please note: This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here.  You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:

Credits:

Data was collected in collaboration with University of Arkansas Facilities Management, Operations and Maintenance and Campus Planning Divisions with outstanding assistance from Bob Harris, Construction Coordinator.

Posted in United States, University of Arkansas, Fayetteville scanning

University of Arkansas – Vol Walker Building – Interior Basement Level

Working with the University of Arkansas’ Facilities Management and Planning Departments, CAST is documenting the historical Vol Walker Building and its renovation. Here are merged scans of the basement or ground floor of the interior, which were collected with the Z+F 5005i Scanner. The project includes multiple floors within the building interior as well as the building exterior. Interior scans were collected with a point spacing that ranged from less than a centimeter at the most dense (at a range of < 1 meter) to approximately 5 cm at the least dense (at a range of 25 meters). These scans were then reduced to a more consistent point spacing of 1 cm for potential future use in historical preservation documentation. Exterior scans were collected with a point spacing of approximately 5-10 cm. The data sets have been separated due to file size and data density.

Vol_Walker_Building_Interior_Basement.zip (4.4 mb) (1 cm spacing in .pts file format)

Sitemap_Vol_Walker_Basement.htm – Explore the data set in Leica TruView, which requires the free Leica TruView viewer and Internet Explorer.  For instructions on using the free TruView data viewer and for a complete list of links to the TruView data related to this project, please see: Accessing Vol Walker Interior TruViews.

Please note: This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here.  You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:

Credits:

Data was collected in collaboration with University of Arkansas Facilities Management, Operations and Maintenance and Campus Planning Divisions with outstanding assistance from Bob Harris, Construction Coordinator.

 

 

Posted in Scanning, United States, University of Arkansas, Fayetteville scanning


University of Arkansas – Vol Walker Building – Interior Floor 3

Working with the University of Arkansas’ Facilities Management and Planning Departments, CAST is documenting the historical Vol Walker Building and its renovation. Here are merged scans of the third floor of the interior, which were collected with the Z+F 5005i Scanner. The project includes multiple floors within the building interior as well as the building exterior. Interior scans were collected with a point spacing that ranged from less than a centimeter at the most dense (at a range of < 1 meter) to approximately 5 cm at the least dense (at a range of 25 meters). These scans were then reduced to a more consistent point spacing of 1 cm for potential future use in historical preservation documentation. Exterior scans were collected with a point spacing of approximately 5-10 cm. The data sets have been separated due to file size and data density.

Vol_Walker_Building_Interior_Floor_3.zip (4.4 MB) (1 cm spacing in .pts file format)

Sitemap_Vol_Walker_Floor_3.htm – Explore the data set in Leica TruView, which requires the free Leica TruView viewer and Internet Explorer. For instructions on using the free TruView data viewer and for a complete list of links to the TruView data related to this project, please see: Accessing Vol Walker Interior TruViews.

Please note: This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:

Credits:

Data was collected in collaboration with University of Arkansas Facilities Management, Operations and Maintenance and Campus Planning Divisions with outstanding assistance from Bob Harris, Construction Coordinator.

Posted in United States, University of Arkansas, Fayetteville scanning

Leica CloudWorx 4.2 and AutoCAD 2012 – Digitizing a Point Cloud in 2D

This workflow will show you how to install and use Leica’s CloudWorx plug-in for AutoCAD.
Hint: You can click on any image to see a larger version.

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] WHY USE BOTH PLATFORMS[/wptabtitle]

[wptabcontent]Why use both platforms? : Leica’s CloudWorx plug-in allows the user to view, measure, and draft in an AutoCAD environment, referencing the point cloud. AutoCAD does not load millions of points but treats the point cloud as a “read-only external reference file”.  This relationship allows the user to take advantage of the accuracy of the point cloud data and the advanced modeling functions in AutoCAD; it also makes comparisons, analysis, and visualizations possible.  You do not need to have Cyclone installed on a computer to use CloudWorx as it is a standalone plug-in.

The CloudWorx Plug-in installs the menu and toolbars into CAD, allowing CAD applications to utilize/view point cloud data in a read-only format. The point cloud can be viewed and referenced but it cannot be modified or deleted through CAD, thus the original point cloud data is preserved. CloudWorx is recommended for large data sets (1,000,000 or more points).

NOTE: There is also a COE import/export tool available. It allows points and modeled objects to be imported/exported/edited between Cyclone and ACAD (objects can be imported/exported with or without the point cloud data) and must be installed separately. Objects and points can be directly edited in the CAD environment. Importing/exporting points with the COE tool is only recommended for small point sets (less than 1,000,000 points).
[/wptabcontent]

[wptabtitle] INSTALLING THE CLOUDWORX PLUG-IN[/wptabtitle]

[wptabcontent]

To Install the CloudWorx Plug-In:

I. Run the executable file: CloudWorxForAutoCAD422.exe (this file may be downloaded from the Leica downloads page – check for updates, as the plug-in changes as versions update). CloudWorx toolbars appear in the AutoCAD workspace upon installation (see Figure 1).

NOTE: Changing/updating Cyclone software licenses (such as switching from a floating license to a node-locked license or vice versa) may affect the installation of CloudWorx plug-ins. If licensing has recently changed and you experience problems installing or accessing plug-ins, confirm that the Cyclone installation has been updated -> Run the original installation file > In the dialogue box, choose the “Update/Reinstall” option -> Re-install the plug-ins. If this does not work, Cyclone may need to be completely un-installed and re-installed.

Figure 1 CloudWorx Toolbar appears at top of AutoCAD screen upon installation

II. Allow the installer to open AutoCAD to configure the plug-in.  AutoCAD should close automatically after this configuration.  If it does not close automatically, close it.
[/wptabcontent]

[wptabtitle]INSTALLING CLOUDWORX CONT.[/wptabtitle]

[wptabcontent]III.  After you have completed the CloudWorx installation, open AutoCAD > If the CloudWorx toolbar (see Figure 1) does not appear, type CWMENU into the command line in AutoCAD and read the command line. It should read “Customization file loaded successfully”. If it was already configured, the command line states “Customization file unloaded successfully” > In that case, type CWMENU again to re-load CloudWorx.

IV. If the toolbar does not appear although it shows as loaded, re-start AutoCAD.

V. In addition to the toolbars, you must also load the menu options > Type MENULOAD in the command line in AutoCAD > Type CloudWorx in the ‘load customization file’ field and close > A CloudWorx menu tab will appear at the top of the screen (see Figure 2)

 Figure 2 CloudWorx Menu Tab appears once loaded

[/wptabcontent]

[wptabtitle] SETTING UP YOUR MODEL SPACE IN CYCLONE[/wptabtitle]

[wptabcontent]

Setting Up your ModelSpace in Cyclone :

Cyclone databases, with names ending in the .imp extension, are contained within the server in Cyclone. These files may be stored on a network, but it is recommended to copy the files locally while modeling. Copy all files, including the Database folder (containing the .imp file, pcesets, eventlog folder, and recovery folder) and the Scan folder (containing the Stations/scans, the ControlPoints.ini, and the project.ini).

I. Cyclone ModelSpace: Copy out feature or area to be modeled into a working MS -> Clean/delete un-needed data (be sure to view the data in all directions x, y, z)

II. Unify clouds to be modeled if they are not unified already (Select All -> Tools -> Unify Clouds)

III. Set Default Cloud > Select All -> Tools -> Scanner -> Set ScanWorld Default Scans -> Close (a dialogue box appears prompting you to merge, remove, or delete the MS. DO NOT do any of these; select Close)

NOTE: It is recommended to create a unique, copied MS that is only used with CloudWorx; actively editing the Cyclone MS that is being modeled in ACAD can cause visibility/access issues in the CloudWorx application

 IV. Cyclone Navigator: Rename the MS and the MS view to identify the feature and to include the tag ‘CloudWorx’ to avoid future confusion > RC and rename the MS and the MS View

V. Cyclone Navigator: Remove the database from the server > Configure > Databases > Select database > Remove > DO NOT DESTROY > Close Cyclone Navigator[/wptabcontent]

[wptabtitle] SETTING UP MODEL SPACE IN AutoCAD[/wptabtitle]

[wptabcontent]

 Setting Up a Model Space in AutoCAD :

I. Create a New 3D Drawing > In ACAD -> Start Menu > New > Drawing > Choose 3D.dwt file in template library > Set Units (Generally, Type: Decimal and Insert Scale Units: Meters) > The new drawing MUST be 3D; if you are not sure, create a new drawing and confirm this template.  See ACAD help if needed

II. Set ACAD Object Snaps (OSNAP) – Confirm that OSNAP’S are turned on and the node-snap is active -> Command line: OSNAP -> at dialogue box clear all and check ‘Node’ -> OK -> Or use the OSNAP icon at bottom of the ACAD screen -> LC toggles osnap on/off and RC gives options/settings

III. Default Coordinate System – By default, ACAD opens in the World Coordinate System (WCS), which is coincident with the scan world’s coordinate system. All views, grids, and many operations are based on the active coordinate system. If the feature to be modeled is aligned with a standardized axis within this system, no adjustments are needed and modeling should take place in the WCS. If the section/slice is not aligned with a standard axis, ACAD’s coordinates can be temporarily aligned to a User Coordinate System (UCS) to accommodate the slicing and modeling processes (see Aligning ACAD View below).
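The idea of temporarily re-aligning coordinates to a feature can be sketched numerically. A minimal illustration (plain NumPy, not Autodesk or Leica API code; the wall endpoints are hypothetical) of building UCS axes from two points picked along a wall and re-expressing WCS points in that UCS:

```python
import numpy as np

def ucs_from_wall(p0, p1, up=(0.0, 0.0, 1.0)):
    """Build UCS axes aligned to a wall: x along p0->p1 (horizontal), z vertical."""
    x = np.asarray(p1, float) - np.asarray(p0, float)
    x[2] = 0.0                        # keep the x axis horizontal
    x /= np.linalg.norm(x)
    z = np.asarray(up, float)
    y = np.cross(z, x)                # right-handed: y = z cross x
    return np.stack([x, y, z])        # rows are the UCS axes in WCS coordinates

def to_ucs(points, origin, axes):
    """Express WCS points in the UCS defined by `origin` and `axes`."""
    return (np.asarray(points, float) - origin) @ axes.T
```

In the UCS, a wall that ran diagonally in the WCS lies along the local x axis, so orthogonal views and slices line up with the feature.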
[/wptabcontent]

[wptabtitle] VIEWING A POINT CLOUD IN AutoCAD WITH CLOUDWORX[/wptabtitle]

[wptabcontent]

 Viewing a Point Cloud in AutoCAD with CloudWorx :

I. ACAD: Configure Database to establish a connection with Cyclone’s database and point cloud engine > CloudWorx Main toolbar > Configure Database Icon > Expand Servers folder > Highlight existing server (by default this is your computer) > Click Databases > Add > Browse for local .imp file > Open > OK

NOTE: Once a connection to the database is configured, you do not need to repeat this step, you can start with Step II: Importing a MS View

II. Select CloudWorx Icon to Import ModelSpace View  > Dialogue box appears > LC the browse button > It may take several seconds to populate, be patient! > Drill down to locate the MS View created in Cyclone (when you have drilled down to the correct MS view level, the ‘Open’ button will become active) > Select Open

III.  Confirm ACAD units (generally meters) & coordinate systems (by default, the coordinate system coincides with the system in Cyclone) > OK (this may take several minutes, during which ACAD may appear frozen – again, be patient!)[/wptabcontent]

IV. CloudWorx Visibility: > on main CloudWorx toolbar, select visibility icon > Confirm that point clouds are visible and can be snapped to

V. The MS will appear in the ACAD window.  Depending on the size of the MS you are viewing, this may take several seconds to several minutes.  If it does not appear, zoom extents and you should see the extents of the entire MS represented with a point at each corner and the area you isolated in Cyclone as a denser area within this boundary.

If the point cloud appears very sparse, click the Regenerate Point Clouds Icon .

If the point cloud still appears sparse, try adjusting the Point Density

Also note the point cloud viewing options: View Intensity  and View Colors (RGB) from Scanner

[wptabtitle] ALIGNING ACAD VIEW[/wptabtitle] [wptabcontent]VI. Aligning the ACAD View for Digitizing a top/plan view > In a side or front view, zoom to the first object you are drawing (if zooming is hard to control when the point clouds are referenced, use the zoom window for better control) > If the pre-set front/side view is not aligned to your object(s), use the orbit command to align your view as much as possible

VII. Slice the cloud to represent your section cut >

LC on the CloudWorx Slice Icon to expand the options > Move down to the appropriate slice option and LC > Select the first/lower clipping plane by clicking the endpoint of your lower reference line > Select the second/upper clipping plane by clicking the endpoint of your upper reference line > The point cloud is now clipped to show the plan cross section cut.  To restore the entire cloud and remove the effects of the slice, select the ‘Reset all clipping’ icon.  Once you are satisfied, turn off the ‘Slicing Reference’ layer.
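Conceptually, the slice keeps only the points lying between the two parallel clipping planes. A minimal sketch of that filter (illustrative NumPy only, not CloudWorx code):

```python
import numpy as np

def slice_cloud(points, origin, normal, thickness):
    """Keep points within +/- thickness/2 of the plane through `origin` with `normal`."""
    pts = np.asarray(points, float)
    n = np.asarray(normal, float)
    n = n / np.linalg.norm(n)
    d = (pts - origin) @ n            # signed distance of each point to the plane
    return pts[np.abs(d) <= thickness / 2.0]
```

For a plan cut, `normal` would be vertical and `thickness` the distance between your two reference lines.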

[/wptabcontent]

[wptabtitle] BEGIN DIGITIZING THE POINT CLOUD[/wptabtitle] [wptabcontent]VIII. Begin Digitizing the Point Cloud > If you are digitizing in three dimensions, use the snaps to snap to the cloud directly.  If you are digitizing in two dimensions, disable the snap-to-cloud option and your drawing will be on a projection plane that is aligned to the coordinate system that is active while drawing.

NOTE: If you want to utilize the ability to snap to points, it is recommended that you snap to the points and then adjust the properties of the line work so that the elevation and start/end z position are zero.[/wptabcontent]

[wptabtitle] HIDING REGIONS[/wptabtitle]

[wptabcontent]IX. Hiding Regions is useful to isolate a specific area or feature and to ease/speed up modeling and navigation in large data sets -> CloudWorx Tab -> Hide/Restore Point Cloud (or the icon on the toolbar) -> Inside Fence -> Draw a polygon around the area to model -> The area is isolated as other points are hidden -> To restore points -> Hide/Restore Point Cloud -> Hide Regions Manager -> Uncheck the State check box -> OK. BEWARE: If a Cyclone MS has been previously configured in an ACAD drawing, when opened in a new DWG the point cloud may appear with regions hidden from the previous configuration; if all points do not appear, always check the Hide Regions Manager and clear it if needed.[/wptabcontent]

[wptabtitle] TIPS[/wptabtitle] [wptabcontent]TIPS: If point cloud is not visible or only a few points are visible (especially upon re-opening a drawing) confirm:

  1. Data is stored locally on computer
  2. Cyclone MS: ScanWorld has been unified and ‘Set ScanWorld Default Scans’ has been set
  3. Cyclone MS: Points are visible and selectable
  4. Licenses are available and plug-ins have been installed
  5. ACAD: Points have been regenerated -> command line: CWREGEN
  6. ACAD: Confirm data cloud is present -> command line: LIST -> All -> Enter (twice) -> F2 expands history, which should show several objects listed as CYPOINTBLOCKENT on CloudWorx layer
  7. ACAD: clear all hidden regions and sections/slices that may be hiding the cloud
  8. ACAD: Adjusting visibility from intensity mapping to scanner color and regenerating also may help appearance

 [/wptabcontent] [/wptabs]

Posted in Leica CloudWorx, Workflows

Leica Cyclone 7.1.1: Registering Scans in Cyclone

[wptabs style=”wpui-alma” mode=”vertical”]

[wptabtitle] PRE-REGISTRATION DATA CLEANUP[/wptabtitle]

[wptabcontent]

Pre-Registration Data Cleanup in Cyclone:

If you would like to edit the data (clean up or remove bad or unuseful data) prior to a registration, follow the steps below.

  1. To clean noise or unwanted points from an unprocessed/imported point cloud -> Create/Open ModelSpace (MS) -> Use selection and fencing tools to finish cleaning/deleting the cloud -> Select the cleaned point cloud -> Tools -> Registration -> Copy to Control Space -> Close MS
  2. Open Control Space -> Select all should show 2 point clouds -> Select original cloud (selecting single point that was deleted/cleaned in MS will select the original point cloud) -> delete the original point cloud so that only the edited/cleaned point cloud remains -> Select all (should be a single edited cloud) -> Tools -> Scanner -> Set scan world default clouds -> Close

[/wptabcontent]

[wptabtitle] CLOUD CONSTRAINTS WIZARD[/wptabtitle]

[wptabcontent]

Registering Scans in Cyclone:

From the Cyclone Help Manual:
“Registration is the process of integrating the ScanWorlds for a project into a single coordinate system as a registered ScanWorld. This integration is derived by using a system of constraints, which are pairs of equivalent or overlapping objects that exist in two ScanWorlds. The objects involved in these constraints are maintained in a ControlSpace, where they can be reviewed, organized, and removed. They cannot be moved or resized in the ControlSpace.
The registration process computes the optimal overall alignment transformations for each component ScanWorld in the registration, such that the constraints are matched as closely as possible.”
  • To register scans together in Cyclone, go to Create – Registration and open the Registration window.
  • From the Registration window, RC and add all of the ScanWorlds that you want to register.
  • Open the Cloud Constraints Wizard, choose 2 ScanWorlds that you want to add constraints to, and click Update.

* Tip: To remove a picked point, simply select on the point again.[/wptabcontent]

[wptabtitle] PICK MATCHING POINTS[/wptabtitle]

[wptabcontent]

Figure 1: Locking two scan views so that the views rotate/pan/etc… together

 

  • Next, pick at least 3 matching points between the 2 viewers. It is CRITICAL to use the Multi-pick button (single pick will erase all other previously picked points in that ScanWorld) and to ZOOM IN CLOSE when picking points. Also pick points that are spread out across the scan and across different geometries/planar directions. Once the two scan views are aligned to one another, you can also use the Lock button to constrain the two views so that they move/rotate together. Once you have selected all of your points, select Constrain from the Cloud Constraints Wizard or go to Constraint – Add Cloud Constraint. If the constraint does not work, pick more points and/or remove any bad or incorrect points. NOTE: Cyclone only uses 3 pick points; if you pick 4 or more, Cyclone will only use the best 3.
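Under the hood, three or more matched point pairs determine a rigid transformation (rotation plus translation) between two ScanWorlds. A minimal sketch of that computation – the standard SVD-based (Kabsch) least-squares solution in NumPy, shown for illustration only, not Cyclone's actual solver:

```python
import numpy as np

def rigid_transform(src, dst):
    """Estimate rotation R and translation t mapping src points onto dst points."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)            # 3x3 cross-covariance of centered pairs
    U, S, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against a reflection solution
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t
```

This is why three well-spread, non-collinear picks are the minimum: fewer (or collinear) picks leave the rotation under-determined.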

 

[/wptabcontent]

[wptabtitle] OPTIMIZE CONSTRAINT[/wptabtitle]

[wptabcontent]

Once the constraint is successfully added, go to the Constraints tab, select the constraint, then go to the Cloud Constraint menu and select Optimize Constraint. A histogram appears showing each of the iterations being run – a very steep curve on the left-hand side of the histogram is ideal. If the Optimize results come back as under-constrained, the results may still be usable in registration. Always observe the Error Vector/RMS error from the constraint optimization; it should be around 0.01 m – if it is not, pick more accurate points and Update the constraint.

NOTE: Choosing points or using targets to place constraints aligns two individual ScanWorlds. Global Registration uses all of the constraints/ScanWorlds to spread the error among constraints throughout the project. In the registration, the Constraint List tab has several options (continue to the next tab for an explanation of these options).[/wptabcontent]

[wptabtitle] CONSTRAINT LIST OPTIONS[/wptabtitle]

[wptabcontent]

  • Weights on constraints scale their error contribution to the global registration relative to the error contribution of HDS Target constraints, which have weights of 1.0; the value can be set from 0.0 (constraint has no effect on registration) to 1.0; a constraint set to 0.5 has half the influence of a constraint set to 1.0. Decreasing the weight of a constraint with higher-than-average error is not recommended, as it leads to lower overall accuracy. You should decrease the weight only if you believe the measurement of one or both of the objects in the constraint is less accurate than the other constraints.
  • The Error column lists the error of each constraint after global registration; this is the distance between the two constraint objects after optimal registration for their ScanWorlds; Important: this number relates the individual constraint error to the home ScanWorld and becomes relevant only when all ScanWorlds are registered – therefore this value can be ignored (and may increase/vary greatly) during alignment and should be reviewed only after registration.
  • Error Vector or Root Mean Square (RMS) error represents the error for the individual constraint between two ScanWorlds. This error is directly related to the accuracy and density of the data (a general rule of thumb is that accuracy should be 10% of the point spacing or better).
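The RMS figure and the 10% rule of thumb are easy to compute by hand. A small illustration (the residual values in the test are hypothetical, not from any real registration):

```python
import numpy as np

def constraint_rms(residuals):
    """Root-mean-square of per-point alignment residuals, in metres."""
    r = np.asarray(residuals, float)
    return float(np.sqrt(np.mean(r ** 2)))

def within_rule_of_thumb(rms, point_spacing):
    """Check the rule of thumb above: RMS should be 10% of point spacing or better."""
    return rms <= 0.1 * point_spacing
```

So a constraint with ~0.01 m RMS is acceptable for clouds with roughly 10 cm point spacing or coarser, but marginal for much denser data.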

[/wptabcontent]

[wptabtitle] CONSTRAINT LIST OPTIONS PT 2[/wptabtitle]

[wptabcontent]

  • Max Search Distance is how far Cyclone looks to find overlapping data – this is also used for the initial pick points, meaning that if your initial pick points were farther apart than this number, the initial constraint would have failed.
  • Subsampling Percentage is the amount of data used in the optimization operation; by default this is set to 3% (which is fine when the data has plenty of overlap or when using a slow laptop in the field); however, when improving the RMS and tightening up a constraint, increasing this percentage (to between 20-80%, depending on the size of the data set) is very useful.
  • Max Iterations is the number of times the optimization algorithm repeats; by default this is set to 100; increasing this value to 500 when increasing the subsampled set has been shown to improve the RMS; however, any advantage to increasing it beyond 500 is questionable.
  • Optimization Preferences – After running an optimization, if the RMS is still higher than desired, adjusting the parameters of the optimization operation may help improve the alignment > Select the constraint in the constraint list > RC > Cloud Constraint > Edit Parameters

[/wptabcontent]

[wptabtitle] VIEW YOUR RESULTS[/wptabtitle]

[wptabcontent]
To view the results of the registration between 2 scans, RC on the Constraint in the Constraints List and choose View Interim Results – this will open a model space with the two aligned scans. Check building/window edges and other identifiable features in both scans and make sure that they line up correctly. Go to Tools – Scanner – ScanWorld Explorer – Shift + select both scans and choose the last icon, which applies a unique color to each ScanWorld; this also helps in visualizing the quality of the alignment (see image below). If you are happy with the results, close the ModelSpace view and do not save.

Figure 2: Using the ScanWorld Explorer in a ModelSpace View to assign each scan a unique color

[/wptabcontent]

[wptabtitle] CONTINUE REGISTERING SCANS[/wptabtitle]

[wptabcontent]

 

  • Next, repeat steps 3-6 for the next scan in the sequence. The logic is to align Scan A to Scan B, Scan B to Scan C, and on around until you align Scan Z back to Scan A
  • Once all constraints have been added, verified and optimized, you are ready to run the registration process. Go to the Registration Menu and choose Register. Once the Registration is complete, the Registration Diagnostics will be displayed indicating the success of the registration. Note the error statistics and SAVE THIS FILE in your root data directory.
  • Now you are ready to create a new ScanWorld that consists of the registered scans. Go to the Registration Menu and choose Create ScanWorld/Freeze Registration.
  • Return to the Cyclone Navigator and a newly created ScanWorld [Registration1] file will be created at the bottom of the scan list in your project. Create a new ModelSpace View from the registered ScanWorld.
  • If you would like to continue modeling in the newly created ModelSpace View or to further optimize the registered point cloud, you should consider unifying or merging the dataset (see information on Unify/Merge below)

 

[/wptabcontent]

[wptabtitle] UNIFYING OR MERGING MULTIPLE REGISTERED POINT CLOUDS[/wptabtitle]

[wptabcontent]

Unifying or Merging Multiple Registered Point Clouds in Cyclone:

Unifying Point Clouds combines multiple point clouds into a single efficient point cloud within a single ModelSpace. The Unify Command is usually executed within a ModelSpace containing multiple, registered scans with a large number of point clouds; after clouds are unified, you generally want to Set ScanWorld Default Clouds to designate the unified cloud as the default cloud for all subsequent ModelSpaces. Note that Unifying cannot be undone and the ScanWorld Explorer is no longer available so it is highly suggested to copy the original data before unifying. Unifying a dataset also removes redundant overlapping points.

Merging combines two or more objects of the same type to create a single new object. The merge function fills in the space between the selected objects and is particularly useful to fill in areas that have not been scanned directly. If created from point clouds, merged objects are refit to the original clouds depending on the current settings for Object Preferences and fit. Users can access the ScanWorld Explorer after merging.

To Unify Multiple Registered Point Clouds > Select All > Tools > Unify Clouds > Options:

  • Reduce Cloud: Average Point Spacing (optional)
    • Enter a value for the desired average spacing between any two points. This will subsample the data to the entered resolution
  • Point Cloud Reduction (optional)
    • Additional data reduction. If you are dealing with a very complex, large dataset – some data reduction is recommended.
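The "average point spacing" reduction amounts to subsampling the cloud to a target resolution. A minimal voxel-grid sketch of the idea – keeping one point per cube of the chosen spacing (illustrative only; this is not Cyclone's documented reduction algorithm):

```python
import numpy as np

def subsample_by_spacing(points, spacing):
    """Voxel-grid subsample: keep one point per cube of side `spacing`."""
    pts = np.asarray(points, float)
    keys = np.floor(pts / spacing).astype(np.int64)   # voxel index of each point
    _, idx = np.unique(keys, axis=0, return_index=True)
    return pts[np.sort(idx)]                          # first point seen in each voxel
```

Points closer together than the spacing collapse to a single representative, which is also how redundant overlapping points between registered scans get thinned.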

To Merge Multiple Registered Clouds > Select All > Create Object > Merge[/wptabcontent][/wptabs]

Posted in Cyclone, Scanning, Software, Workflows

Leica Cyclone 7.0: Modeling Non-Rectangular Patches

This workflow will show you how to model non-rectangular patches in Leica’s Cyclone.
Hint: You can click on any image to see a larger version.

 

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] THE 2D DRAWING TOOLBAR[/wptabtitle]

[wptabcontent]

Modeling a Non-Rectangular Patch

Quick Discussion on 2D Drawing and Reference Planes

To create a non-rectangular patch, you use the tools on the 2D drawing toolbar. If the drawing toolbar is not enabled, RC anywhere in the Toolbar area and add the Drawing toolbar. As a good rule of thumb, I always dock it on the right side of the screen.

IMPORTANT: All 2D drawings in Cyclone are constrained to a base plane. To view the active reference plane, go to Tools –> Reference Plane –> Show Active Plane. All Cyclone projects come with a default XY plane (Z normal) – shown below.

Figure 12: The base XY plane in all Cyclone projects.

You can add numerous reference planes to a project: Tools –> Reference Plane –> Add/Edit. You can also move reference planes and assign them to selected objects: Select Object -> RC -> Set on Object. If you are not working in a specific coordinate system, it is easiest just to move the active reference plane to the object that you are trying to model.

[/wptabcontent]

[wptabtitle]MODEL A NON-RECTANGULAR PATCH[/wptabtitle]

[wptabcontent]Model a Non Rectangular Patch
To model a non-rectangular patch, such as a unique outline of a wall, first fence and create a small patch on the wall. Next, go to Tools –> Reference Plane –> Set on Object to align the reference plane along the same direction as the newly created patch. Select the patch, then select ViewPoint –> Align to Selection; this constrains the view to be directly perpendicular to the patch. At this point, it is important not to rotate the view; if necessary, choose View –> View Lock –> Rotate.

Figure 13: (Left) Drawing a small patch along a section of wall and (Right) Aligning the base reference plane to that patch in order to use the 2D drawing tools

Note: viewing orthogonally or perspectively affects visibility and clarity of features; toggle between views with hot key ‘o’.
Figure 14: Mode Toolbar shows whether current view is orthogonal or perspective

Next, choose the appropriate 2D drawing tool, in this case the Draw Polygon tool -> Trace the feature of interest -> Accept/Create the drawing by either clicking the green check button at the top of the Drawing Toolbar or RC -> Create. The 2D sketch should turn bright green with orange handles at the intersections. Edit the sketch as needed. Then, to create a patch from the 2D sketch, select the sketch -> Create Object –> From Curves –> Patch.

Figure 15: (Left) 2D Curve sketched on building feature profile (Right) Patch fit to the curve

[/wptabcontent]

[wptabtitle]You have Finished![/wptabtitle]

[wptabcontent]You have finished the Leica Cyclone 7.0: Introduction to Modeling workflow![/wptabcontent][/wptabs]

Posted in Cyclone, Modeling, Scanning, Software, Workflow, Workflows

Leica Cyclone 7.0: Modeling – Editing, Extending, and Extruding Rectangular Patches

This workflow will show you how to edit, extend, and extrude patches in Leica’s Cyclone.
Hint: You can click on any image to see a larger version.

 

Modeling – Editing, Extending, and Extruding Patches:

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] MAKE RECTANGULAR[/wptabtitle]

[wptabcontent]Make Rectangular: When patches are created, they have multiple edges and vertices. It is a good rule of thumb, to make all patches rectangular before editing them. To make patches rectangular, Multi-select all patches –> Edit Object –> Patch –> Make Rectangular

Figure 7: Patches that have been made rectangular

  1. Note: When made rectangular, patches will sometimes extend beyond the surface being modeled; use the handles to drag the corners/edges so that they are completely within the area of the surface being modeled

    Figure 8 (Left) Surface to apply patch to is highlighted in yellow; (Center) Patch created and made rectangular – note the corners extend beyond the surface being modeled; (Right) Using the handles, the patch has been dragged so that it is completely within the surface being modeled

[/wptabcontent]

[wptabtitle] EXTENDING PATCHES[/wptabtitle]

[wptabcontent]Extending Patches:

Extending patches extends the boundary of the selected patches to intersect together. There are several options to do this.
Note: The order in which the patches are extended and the number of patches being extended in a single command affect the calculations that Cyclone makes. Adjusting the order and number of patches involved may create different results depending on the complexity of the geometry.

To extend all patches to one another –> Multi-select the patches to extend –> Edit Object –> Extend All Objects.
To extend patches to a single patch –> Multi-select (1) the patches to extend and (2) the patch which the others are to extend to –> Edit Object –> Extend to Last Selection (Note: you may also extend to a reference plane with this command, or Select Object -> RC -> Extend to Reference Plane)
Next: Continue extending the objects to their perimeters (i.e., wall boundaries) and using handles to snap adjacent objects/patches together
[/wptabcontent]

[wptabtitle] SLICING PATCHES[/wptabtitle]

[wptabcontent]Slicing Patches:

As objects/patches are extended, they may extend beyond the plane or object specified. Patches may be sliced by adjacent patches to clean up corners.
Multi-select the patches to slice -> Create Object -> Slice -> By All in Selection (every patch selected is sliced where it intersects with every other patch) / By Last Selection (every patch selected is sliced where it intersects with the last object selected) / By Reference Plane (every patch selected is sliced by the specified Reference Plane)

Figure 9: (Left) Extended patch has extended beyond the referenced object (a single patch makes up the area inside the yellow triangle) (Right) Patch has been sliced so that it is now divided into two patches where it intersects the reference object (the sub-divided patch is now highlighted with a yellow triangle)

Figure 10: Sub-divided patches can now be deleted, leaving clean corners where patches intersect

[/wptabcontent]

[wptabtitle] EXTRUDING PATCHES[/wptabtitle]

[wptabcontent]Extruding Patches:
Create the patch –> Multi-select (1) the patch and (2) a point on the point cloud or another object that you would like to extrude the patch to.

Figure 11: (Left) Creating a rectangular patch, then multi-selecting the patch (1) and a point on the wall (2) to extrude it to (Right) The resulting extrusion

[/wptabcontent]

[wptabtitle] CONTINUE TO… [/wptabtitle]

[wptabcontent] Continue to Leica Cyclone 7.0: Modeling Non-Rectangular Patches.

[/wptabcontent][/wptabs]

Posted in Cyclone, Modeling, Software, Workflow

Leica Cyclone 7.0: Modeling a Flat Surface

This workflow will show you how to begin modeling in Leica’s Cyclone.
Hint: You can click on any image to see a larger version.

 

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] GETTING STARTED[/wptabtitle] [wptabcontent]

Getting Started

  1. Cyclone Navigator –> Project –> Model Space -> Open/Copy/Delete Model Spaces here
  2. Draw a fence to select the area to be modeled. The fence tools are located in the ‘Mode’ toolbar next to the navigation tools. Once the points are “fenced”, copy them into a new Working Model Space (Select –> Right Click –> Copy Fenced to new Model Space)

Figure 3: Fence Tools

3. Unify or Merge the clouds in the new MS (See Temporary Model Spaces Above)

[/wptabcontent]

[wptabtitle] MODELING A FLAT SURFACE: METHOD 1[/wptabtitle] [wptabcontent]

Modeling – Creating a patch to represent a wall or flat surface:

Method 1 – Fit to Cloud (More conservative):

  1. Draw a fence across flat surface using appropriate fence tool.
  2. Once the fence has been drawn, RC, select Point Cloud SubSelection – Add Inside Fence.
    1. Next, make sure there are no points BEHIND the flat surface that accidentally got selected. If there are, draw another fence around them, RC – Point Cloud SubSelection – Remove Inside Fence (you may have to repeat this multiple times). Once you have the flat surface properly selected, select Create Object –> Fit to Cloud –> Patch.

Figure 4: (Left) Selecting points in a dataset (Right) Rotating the dataset and removing the points that were erroneously selected behind or “through” the data

[/wptabcontent]

[wptabtitle] METHOD 2[/wptabtitle] [wptabcontent]Method 2 – Fit to Fenced:

  1. Draw a fence across the flat surface using the appropriate fence tool (making sure not to select any points behind the flat surface). RC –> Select Fit Fenced –> Patch.

Figure 5: Process of drawing a fence on a flat surface then simply right clicking and selecting Fit Fenced – Patch

[/wptabcontent]

[wptabtitle] METHOD 3[/wptabtitle]

[wptabcontent]Method 3 – Region Grow (relies on Cyclone and user-set parameters to decide how the patch forms)

  1. Select a point at the center of the area or surface to patch (Selection Tool –> Left Click); Note: the Region Grow command evaluates the points in a radius from the point selected
  2. Create Object –> Region Grow –> Patch
    1. The Region Grow Dialogue Box allows the user to adjust the number of points that Cyclone uses to calculate the surface of the patch
  • Region Thickness – thickness/depth of point cloud data used for region; lowering this number increases accuracy; be aware of changes in the depth of surface materials such as the mortar between bricks when adjusting this number.
  • Maximum Gap to Span – maximum hole or shadow in cloud point data to “jump” and continue calculating the region
  • Angle Tolerance – used for meshes only
  • Region Size – the diameter radiating from the central pick point to calculate the region

Figure 6: (Left) Region Grow with maximum region size – the white points represent the proposed patch. (Right) Reducing the region size so that fewer points are considered for the proposed patch – note the radial pattern with the first selected point at the center.
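Conceptually, the Region Size and Region Thickness parameters gate a plane-fitting region grow. The Python sketch below is purely illustrative (it is not Cyclone's actual algorithm, and the function name and behavior are my own assumptions): candidate points within the region diameter are gathered around the pick point, a best-fit plane is computed, and only points within half the region thickness of that plane are kept.

```python
import numpy as np

def region_grow_patch(points, seed, region_size=1.0, region_thickness=0.02):
    """Illustrative sketch of Region Grow-style parameters (not Cyclone's
    actual algorithm): keep points within the region diameter around the
    seed whose distance to a local best-fit plane is within half of
    region_thickness."""
    # Candidates: points inside the region diameter around the seed point
    d = np.linalg.norm(points - seed, axis=1)
    candidates = points[d <= region_size / 2.0]  # region_size is a diameter

    # Fit a plane to the candidates via SVD (least-squares plane)
    centroid = candidates.mean(axis=0)
    _, _, vt = np.linalg.svd(candidates - centroid)
    normal = vt[-1]  # smallest singular vector = plane normal

    # Keep only points within half the region thickness of the plane
    dist_to_plane = np.abs((candidates - centroid) @ normal)
    return candidates[dist_to_plane <= region_thickness / 2.0]
```

This mirrors why lowering Region Thickness increases accuracy: a thinner slab rejects points (such as recessed mortar joints) that sit off the dominant surface.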

[/wptabcontent]

[wptabtitle] CONTINUE TO…[/wptabtitle]

[wptabcontent]Continue to Leica Cyclone 7.0: Modeling – Editing, Extending, and Extruding Patches[/wptabcontent]
[/wptabs]

Posted in Cyclone, Modeling, Software, Workflow

Z+F Laser Control: Filtering and Exporting Your Data

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] FILTERING[/wptabtitle]

[wptabcontent]

1. Once all of the scans have been color mapped, go through and verify the quality on all of them. As you are viewing the data in 3D, you will notice that there are extra scan points floating around in the air, as shown in the figure below. The data needs to be filtered before it is exported from Laser Control.

clip_image021

Figure 4: Extraneous data in the scan that will need to be filtered out

2. Open the Filter module by clicking on the Preprocessing button clip_image024. Choose all of the scans and then select the Mixed Pixel, Range, and Single Pixel filters – use the default settings for each filter and then verify the results.

3. To see the results of the filter, open a scan in 2D View, RC in the viewing window, and select View Masks. Click on the Scans Tab in the TOC and hit the plus next to each scan – each filter is color-coded to match what you see in the viewing window. You can RC on each filter and choose to View/Hide or Remove it. Each filter can also be run individually using the filter icons in the Filter Toolbar. clip_image026

[/wptabcontent]

[wptabtitle]EXPORTING YOUR DATA[/wptabtitle]

[wptabcontent]

4. Once you are satisfied with the results, you are ready to export the data. Select File – Batch Convert to ZFS. The Source should already point to the root folder where all of the original ZFS files are stored; the Destination is an Export subfolder within the root folder that will be created when the Start button is pressed. Be sure to check Use Mask to ensure that the filters are applied to the exported data, and UNCHECK the Intensity Filter. Hit Start. All ZFS files in the root folder are then converted. Here you can also export the data to ASCII (XYZ.ASC) or PTS. For the ASCII file format, be sure to view the Options and select RGB data.
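If you export to ASCII with RGB enabled, each line of the file typically carries the XYZ coordinates followed by the RGB values. A minimal parsing sketch, assuming a space-separated "X Y Z R G B" layout (verify this against your actual export options – the exact column order can vary):

```python
def parse_xyzrgb_line(line):
    """Parse one line of an exported XYZ RGB ASCII file, assuming the
    common 'X Y Z R G B' layout (check your export options -- the
    column order is an assumption here)."""
    fields = line.split()
    x, y, z = (float(v) for v in fields[:3])   # coordinates in scan units
    r, g, b = (int(v) for v in fields[3:6])    # 0-255 color channels
    return (x, y, z), (r, g, b)
```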

5. The data should now be ready to be imported into other software such as Cyclone, Polyworks, or Rapidform.

[/wptabcontent] [/wptabs]

Posted in Laser Control, Scanning, Software, Workflow

Z+F Laser Control: Manually Identifying Feature Points

[wptabs style=”wpui-alma” mode=”vertical”]

[wptabtitle] MANUALLY IDENTIFYING FEATURE POINTS PT 1[/wptabtitle]

[wptabcontent]

Manually Identifying Feature Points

If the automatic Extract and Identify Feature Points operation did not work or if you need to add additional feature points to a scan to get better results (and less error), then you will need to manually add a few feature points to a scan and its associated images.

1. First, click the Passpair Definition button clip_image028 on the Color Toolbar. Immediately, you should see all of the points in the 2D View identified from the automatic feature point extraction. If you select an image from the TOC and open it, you will also see feature points in the image. The key to good color mapping is to have feature points in all/most of your images. When a scan contains a large percentage of sky, it can often be difficult for the automatic feature point extraction to execute. Therefore, you will want to place points in areas where you would like to see an improvement in the color mapping (areas that were not mapped well initially), and you also want a good distribution of points across all of your images where points can be easily identified.

2. By default when you open the Passpair Definition options the 2D Zoom Windows will automatically open for the scan in the 2D View and also for an image that you open. To effectively use the 2D zoom window in the 2D View (Gray View) change the Selection Mode from Rectangular to Point by clicking on the button several times until it changes from clip_image030 to clip_image032. [/wptabcontent]

[wptabtitle] MANUALLY IDENTIFYING FEATURE POINTS PT 2[/wptabtitle]

[wptabcontent]Now, as you click and drag your mouse around the 2D View, the view updates in the 2D Zoom Window.

3. Now identify some prospective feature points between a scan and its images. Drag the mouse in the 2D View to identify a feature point location in the Zoom View – click on the point in the Zoom View and name it (something that is easy to remember). Now identify the same point in the image view – click on it (using the Zoom View) and give it the EXACT SAME NAME (copying and pasting the name often works best here). Now check all of the images to see if the same point is identifiable in any other image; if so, select the image, identify the point, and give the point the SAME NAME.

clip_image034

Figure 5: Process of identifying and naming manually identified points.

4. Rinse and repeat.

5. Once you have identified all of the manual feature points (3-10), click the calculate extrinsics button clip_image037 to calculate the external calibration file.

6. Note the error and Generate Color Scans to view the results. Verify the quality of the results and proceed as necessary.

[/wptabcontent]

[wptabtitle] CONTINUE TO…[/wptabtitle]

[wptabcontent] If you are satisfied with the quality of your results, continue to Z+F Laser Control: Filtering and Exporting Your Data[/wptabcontent] [/wptabs]

Posted in Laser Control, Scanning, Software, Workflow

Z+F Laser Control: Color Mapping

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] COLOR MAPPING PT 1[/wptabtitle]

[wptabcontent]

Z+F Laser Control Workflow for Color Mapping

1. Create a New Project.

2. Select all scans that you wish to import. If you forget a scan, do NOT use File – Open; instead, Right Click (RC) on the project name in the table of contents (TOC) and choose Add Scan to Project.

3. Make sure the appropriate toolbars are enabled by going to File – Options. Select Plugins on the left and enable the Color and Filter toolbars.

Note: If the Color Plugin does not appear and you are using a floating license, be sure the License.dat file is copied to the local user’s profile at the following location: C:\Users\User Name\AppData\Roaming\ZF. Locate this .dat file in the same location within the folder of another active user.

4. clip_image011Double click (DC) on a scan to open it in 2D View. To view a scan in full 3D, RC in the 2D window and choose Full Scan to 3D (or selection if applicable). Next, open the 3D View by going to Window – 3d View or select the 3D View button.

5. If the images did not import automatically with the scans, RC on the Scan Name(s) in the Project Tab and choose Add Picture from scan. Repeat for all scans; you can also add images by opening the scan (double-clicking it in the Project Tab) > go to the Color Plugin Tab > select Add Images. READ THE PROMPT and confirm that the correct images are being imported/applied to the correct scan. 28 images should appear below the scan in the Project Tab.

[/wptabcontent]

[wptabtitle]COLOR MAPPING PT 2[/wptabtitle]

[wptabcontent]

6. Next check the camera calibration file by clicking on the clip_image013Camera Calibration button or by going to Plugins – Color – Camera Calibration menu. Select the MCAM file from the USB drive that accompanies the Z+F Imager (file extension ext.xml). The MCAM file stores the intrinsic calibration file for the Z+F MCAM camera. It is good practice to place a local copy of the MCAM file in your project folder.

7. Next check the color Properties clip_image015. Make sure that “Map Fast” and “Use Mask” are unchecked. On the left, choose the scans that you wish to apply color mapping to. Note that the associated Greycards and pictures should also be selected. For export format, choose what is appropriate for your desired end product. While deriving the extrinsic calibration confirm that the appropriate scan and images are selected (click ‘Deselect all’ below both the scans and the images, then select the scan for which you are calculating the calibration. Once the scan is selected, the correct images populate)

Export Formats:
JPG and TIFF: Will create a separate image file that will be referenced by the XYZ data. Possibly useful for color correcting the panoramic image file.
ZFS: Creates an XYZRGB data set – As a default use the ZFS file format

Next choose the Camera Calibration file and apply whitebalance correction if desired. Click OK.

[/wptabcontent]

[wptabtitle] COLOR MAPPING PT 3[/wptabtitle]

[wptabcontent]

8. Now we are going to calculate the extrinsic calibration for the camera and associated images. The extrinsic calibration accounts for the subtle differences that occur from mounting the camera on the scanner. In other words, whenever you mount the camera to the scanner, its position is going to vary slightly – these are the camera extrinsics. Therefore, once you calculate extrinsics for a single scan, the same extrinsic file can be applied across multiple scans that were acquired with the camera in the same position (in other words, the camera CANNOT have been removed from the scanner).
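Conceptually, an extrinsic calibration is just a rigid-body transform – a rotation and a translation – relating the scanner and camera coordinate frames. The NumPy sketch below is purely illustrative (the function name and example values are assumptions; Laser Control computes and stores the actual extrinsics internally):

```python
import numpy as np

def apply_extrinsics(points_scanner, R, t):
    """Map 3D points from the scanner frame into the camera frame using
    a rigid-body extrinsic transform: p_cam = R @ p_scan + t.
    Illustrative only -- Laser Control handles this internally."""
    return points_scanner @ R.T + t
```

Because the transform depends only on how the camera sits on the scanner, the same R and t remain valid for every scan taken without remounting the camera, which is exactly why one calibration can be reused across a project.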

9. Now open a scan file and run the Extract and Match Feature Points command. clip_image017 Click OK on the Automatic option to Calculate Extrinsics. The total resulting error should be less than 3 pixels; if it is not, see the process below to manually add more points, or attempt to Extract and Match Points on another scan within the project. DO NOT overwrite the original MCAM file – rename it. NOTE: If insufficient points are extracted with the Coarse setting, try the Fine setting. Depending on the nature of the scan data, these settings may give higher or lower error, and one setting may extract sufficient points when the other will not.

10. Now that we have the extrinsic file, we can Generate Color Scans clip_image019 which actually uses the extrinsic camera calibration file to map the images onto the scan. Do this only for the scan that you used to calculate the feature points and then verify the color mapping quality. If you are satisfied with the color mapping quality, you can then run the Generate Color Scans clip_image019[1] command for the entire project. If the color mapping results are not adequate, then complete the steps in the next two tabs to manually add more feature points for the color mapping process.

[/wptabcontent]

[wptabtitle]CONTINUE TO…[/wptabtitle]

[wptabcontent]

If you are satisfied with the automated color mapping quality and have generated your Color Scans, you can continue to Z+F Laser Control: Filtering and Exporting Your Data

If the color mapping results are not adequate, then continue to Z+F Laser Control: Manually Identifying Feature Points to manually add more feature points for the color mapping process.

[/wptabcontent] [/wptabs]

Posted in Laser Control, Scanning, Software, Workflow

Z+F Laser Scanner: Starting your Scan

This workflow will show you how to start a scan using the Z+F Laser Scanner.
Hint: You can click on any image to see a larger version.

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] POSITIONING AND POWERING UP THE SCANNER[/wptabtitle] [wptabcontent]

1. Position the scanner in the center of the target field

Ensure that the angle of incidence of the laser on the target is larger than 45 degrees

 

Recommended distance between the Z+F scanner and targets

RESOLUTION RECOMMENDED DISTANCE WITH AN ANGLE OF 90 deg.
Middle 1 m to 15 m
High 1 m to 20 m
Super High 1 m to 25 m
Ultra High 1 m to 30 m

With a smaller angle of incidence, the usable target distance is reduced

Power the scanner on by pressing the power button for 0.3 seconds

The power-up process takes approx. 20 seconds, during which time the scanner will rotate while the mirror spins

[/wptabcontent]

[wptabtitle] MENUS AND CONTROLS[/wptabtitle] [wptabcontent]

Menus and Controls

System Menu shown in the display. Main menu order:

· Info

· Status

· Tilt Sensor

· Scanning

· Data Management

Control buttons:

clip_image004[/wptabcontent]

[wptabtitle]SCAN MENU SYMBOLS[/wptabtitle]

[wptabcontent]Scan Menu Symbols

The top line indicates the menu currently in use

clip_image006

See User Manual for full descriptions of all menus

Check the Tilt Sensor Menu

If the inclination in the Y-direction is greater than +2 degrees or less than -2 degrees, an arrow will appear and the inclination should be corrected[/wptabcontent]

[wptabtitle] THE SCANNING MENU[/wptabtitle] [wptabcontent]In the Scanning Menu

RESOLUTION:

Select the preferred resolution level (Middle or High is usually preferred)

Super High, High and Middle resolutions have a low noise option that reduces noise by a factor of 1.4, but scanning time is doubled
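The low-noise tradeoff stated above can be expressed as simple arithmetic. A tiny helper (the function name is mine, purely for illustration) makes the cost explicit when planning scan sessions:

```python
def low_noise_tradeoff(base_noise, base_time_min):
    """Apply the Z+F low-noise option tradeoff described above:
    noise drops by a factor of 1.4 while scan time doubles."""
    return base_noise / 1.4, base_time_min * 2
```

For example, a 7-minute scan becomes a 14-minute scan when the low noise option is enabled.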

VERTICAL SCAN RANGE:

You can select predefined vertical scan ranges using the + / – buttons or further refine each predefined range by pressing the clip_image007 button and increasing or decreasing the values by 5 deg. with the + / – buttons. Predefined ranges include:

· V 0-360

· V 25-160 and 200-335

· V 25-180

· V 180-260

· V 45-135

HORIZONTAL SCAN RANGE:

The horizontal scan range can be set in increments of 5 deg. for both the start and end position using the + / – buttons and pressing the clip_image008 button to confirm the position

Alternatively the scanner start position can be manually set by rotating the scanner to any position greater than 0.0. Press the clip_image009 button to confirm the position. Set the horizontal range end position next in the same manner[/wptabcontent]

[wptabtitle] START THE SCAN[/wptabtitle]

[wptabcontent]Start the Scan

Press the clip_image010 button to start the scanning process. The process can also be stopped using this button and all data collected prior to interruption will be stored.
[/wptabcontent]

[/wptabs]

Posted in Hardware, Scanning, Uncategorized, Workflows, Z+F 5006i

Leica C10: Exporting Your Scan Data

This workflow will show you how export your scan data from the Leica C10 Laser Scanner.
Hint: You can click on any image to see a larger version.

[wptabs style=”wpui-alma” mode=”vertical”]

[wptabtitle] Exporting data from the C10[/wptabtitle]

[wptabcontent] Transferring Data from the C10

Place a USB stick in the USB slot. IMPORTANT: The C10 operating system does not have virus protection. It is absolutely essential to use a clean USB stick and/or laptop anytime you are connecting to the scanner. It is highly recommended that you dedicate a single USB stick to the scanner and that you always use a clean computer to avoid infecting the scanner. If you are using the USB stick with multiple computers, scan it for infections before inserting it into the scanner. The first time you use a new stick, it will take ~30 seconds to recognize the new device.

The highlighted area will show that the USB stick is ready to use. Click into the ‘Tools’ Menu.

Click ‘Transfer’

Click ‘Projects’ clip_image046[8]

The project you have just been working in will be displayed. Hit ‘Cont’ to copy the data to the stick.

You will then see the progress bar indicating the time left to transfer. Once finished, click out to the Main menu using the ‘x’ button.

Click the ‘x’ button
[/wptabcontent]

[wptabtitle] Turning the C10 off[/wptabtitle]
[wptabcontent] Click ‘Yes’ should you wish to turn the scanner off

The scanner will shutdown in ~5 seconds


[/wptabcontent]

[wptabtitle] Copying data to the server or computer[/wptabtitle]

[wptabcontent] Copy data to server or local location for processing by copying the project from Scanner-Projects Folder.  Confirm that the .ini files are included in this project folder.  This entire project folder must be copied locally wherever you are processing the data.
[/wptabcontent]

[wptabtitle] Deleting Data from the C10[/wptabtitle]

[wptabcontent] Once the data has been backed up/transferred to a server or localized location, delete the data from the scanner and from the USB stick.  To delete project/data from the scanner, click ‘Manage’ at the main menu.  Select project and ‘Del’ tab.[/wptabcontent] [/wptabs]

Posted in Leica C10, Scanning, Workflows

Leica C10: Starting a Scan

This workflow will show you how to start a scan using the Leica C10 Laser Scanner.
Hint: You can click on any image to see a larger version.



[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] START SCANNING![/wptabtitle]
[wptabcontent]First, choose a field of view and resolution appropriate for your project.

If you are interested in collecting scan data and RGB/images, hit ‘Sc+Img.’

If you are only interested in collecting scan data, hit ‘Scan’ instead. You will now see the progress bar on the side of the instrument.

Once finished the scanner will return to this dialogue.

Hit ‘ScWin’ to view the collected scan-data.

[/wptabcontent]

[wptabtitle] PREVIEW THE SCAN DATA[/wptabtitle]

[wptabcontent] Here you can see the scan data – click the ‘x’ to return to the previous menu.

[/wptabcontent]

[wptabtitle] CREATE A TARGET IN THE SCENE[/wptabtitle]

[wptabcontent] Click the button highlighted to go to the sub-menu. Click ‘Target’.

Click ‘New’ to create a new target in the scene.

Click into the ‘Target ID’ box and use the keyboard to input a target name – hit ‘Ent’. A recommendation would be to use ‘HDS1’, then ‘HDS2’ and so on if using more than one target in a scene.

 [/wptabcontent]

[wptabtitle] ENTER THE TARGET HEIGHT[/wptabtitle] [wptabcontent] If you have set up the target over a known point you can enter a target height. Click ‘PickT’.

 
[/wptabcontent]

[wptabtitle] LINE UP THE SCANNER WITH THE TARGET[/wptabtitle]

[wptabcontent] You will now see the live video feed – move the scan-head manually to roughly line up with the target. Using the video feed, use the ‘SEEK’ tool to pick the centre of the target – use the zoom buttons to be as accurate as possible.

Hit return; this will return you to the Targeting menu. Hit ‘Cont’. The scanner will now scan the target – this should take between 10 and 30 seconds.

 

Once completed, you will see the following dialogue. Click ‘View’ to see the target.

[/wptabcontent]

[wptabtitle] STORE AND CHECK THE TARGET[/wptabtitle]

[wptabcontent]VERY IMPORTANT: Hit ‘Store’ to ensure the target is copied into the database; the target list will not show the target, but it has been stored.

You can now check that the vertex is in the centre of the target. Hit ‘x’ to return to the previous menu

[/wptabcontent]

[wptabtitle] YOU’VE SCANNED STN1![/wptabtitle]

[wptabcontent] Congratulations – You have now finished scanning at STN1! Time to move to STN2.

Using the ‘x’ button move back to the Scan Parameter window
[/wptabcontent]

[wptabtitle] STARTING A SECOND SCAN STATION[/wptabtitle]

[wptabcontent]IMPORTANT: To set up STN2, click the ‘StdStp’ tab, which takes you to the scan parameters screen.  DO NOT click ‘Cont’ as this will add more data into STN1. (If you are adding data to an existing station, a message will appear stating the Station Number and Current Station Information/Parameters – see below.  Click no to avoid adding data to the previous station.)

 

[/wptabcontent]

[wptabtitle] VIEWING ALL SCAN STATIONS IN A PROJECT[/wptabtitle]

[wptabcontent] Clicking the Setup tab will display the project name and will show all stations collected so far.  Recording each station on the field sketch and noting the station number in the setup tabs help to avoid confusion and overwriting data.

 

To continue scanning and acquiring data at STN2, repeat the steps starting from ‘Start Scanning’,

OR Continue to the next part in the series to transfer data to a USB Stick
[/wptabcontent]

[wptabtitle] CONTINUE TO…[/wptabtitle]

[wptabcontent] Continue to part 4 in the series, Leica C10: Exporting your scan data. [/wptabcontent] [/wptabs]

Posted in Leica C10, Scanning, Workflows

Leica C10: Starting a New Project

This workflow will show you how to create a new project prior to scanning with the Leica C10 Laser Scanner.
Hint: You can click on any image to see a larger version.

[wptabs style=”wpui-alma” mode=”vertical”]

[wptabtitle] Start a new project[/wptabtitle]

[wptabcontent] From the main menu click the ‘Manage’ icon.

Click ‘New’ to start a new project

Click into the ‘Name’ box and a keyboard will appear. Enter your desired project name and click ‘Ent’.

Hit ‘Store’ to store the project folder

[/wptabcontent]

[wptabtitle] SET UP A NEW SCAN STATION[/wptabtitle]

[wptabcontent]You will now see your project highlighted in the ‘Project list’. Click ‘Cont’.

This will return you to the main menu – click on the ‘Scan’ icon.

clip_image014[6]

Click ‘NewSt’ to create a new station setup.  Confirm your project is listed.  Choose ‘StdStp’ (Standard Setup highlighted in magenta below).

 

[/wptabcontent]

[wptabtitle] SELECT PRESETS[/wptabtitle]

[wptabcontent]Click the drop down arrow to the right of the ‘Presets’ menu

Select the appropriate preset or enter a custom field of view.

Preset                    Horizontal FoV°           Vertical FoV°
Custom View                User defined                 User Defined
Quick Scan                   Defined by QS aiming       135(-45- +90)
Rectangle 60×60                 60                                    60
Rectangle 90×90                 90                                    90
Rectangle 360×60              360                                  60
Rectangle 360×90              360                                  90
Target all                            360                                  270[/wptabcontent]

[wptabtitle] SET RESOLUTION[/wptabtitle]

[wptabcontent]Click into the ‘Resolution’ tab and choose the resolution desired. We are initially recommending a 7 minute ‘Medium Res’ Scan.

Preset             Horizontal Spacing       Vertical Spacing          Range
custom res        user defined                         user defined                  user defined
low res                    .2m                                       .2m                                    100m
medium res            .1m                                       .1m                                    100m
high res                  .05m                                     .05m                                  100m
highest res            .02m                                     .02m                                  100m

Once you have chosen your resolution, you will see an indication of the level of detail of your results. (At Medium Resolution this will give you a point spacing of 10mm at 10m, 20mm at 20m, and so on.)
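The point spacing quoted for each preset scales linearly with range, as the example above shows. A small helper (the function name is mine, for illustration only) computes the expected spacing at any distance from the preset values in the table:

```python
def point_spacing_at_range(preset_spacing_m, preset_range_m, target_range_m):
    """Point spacing scales linearly with distance from the scanner.
    E.g. the Medium preset (0.1 m spacing at 100 m) gives 10 mm at 10 m
    and 20 mm at 20 m."""
    return preset_spacing_m * target_range_m / preset_range_m
```

This is useful when deciding whether a given preset will resolve features of a known size at the distances in your scene.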

[/wptabcontent]

[wptabtitle] RGB DATA[/wptabtitle]

[wptabcontent]If RGB data is important, it is recommended that you manually set the exposure.  Click on the Image Ctrl Tab.  Set Exposure to Manual.  Click on ChkExp tab at bottom of screen to check the exposure.  Manually turn the scanner around the field of view, adjusting the exposure with the pull tab on the right.  Once the exposure is correct, click the return key and note the value of the Time (ms).  Use this value for subsequent scans in the area.  Adjust this number if lighting conditions or locations change.[/wptabcontent]

[wptabtitle]CONTINUE TO…[/wptabtitle]

[wptabcontent]Continue to part 3 in the series, Leica C10: Starting a Scan[/wptabcontent][/wptabs]

Posted in Leica C10, Scanning, Uncategorized, Workflows

Getting Started in the GMV


These slides will show you how to get started using the GMV. 

Hint: You can click on any image to see a larger version. 

[SlideDeck id=’4314′ width=’100%’ height=’350px’]

Posted in Uncategorized

Prezi Example

[prezi width=840 height=640 id=’http://prezi.com/q7jk84mui5ft/gmv-introduction-draft/’]

 

Posted in Uncategorized

Rapidform: Digitizing Using the Spline Tool

These tutorials will show you how to digitize archaeological features from terrestrial scan data.
Hint: You can click on any image to see a larger version.
 

Drawing with the spline tool

In some cases using the pencil tool is not convenient. Cases include places where there are unhealed holes in the mesh and places where individual points need to be placed in close proximity to one another in an irregular pattern. Any drawing will likely involve both the pencil and spline tools.

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] THE SPLINE TOOL[/wptabtitle] [wptabcontent]Select the “Spline” tool from the “3D Mesh Sketch” Menu

clip_image061

Fig. 31: The Spline tool[/wptabcontent]

[wptabtitle] NAVIGATE WITH THE SPLINE TOOL[/wptabtitle] [wptabcontent]As with the pencil tool, zoom in and navigate through the mesh to obtain a good view of the area you are digitizing. Place individual vertices at the desired locations on the mesh. The current vertex will be circled. Both vertices and the polyline will be drawn as they are created.

clip_image063

Fig. 32: Drawing a spline on a mesh.[/wptabcontent]

[wptabtitle] RENDERING DURING NAVIGATION[/wptabtitle] [wptabcontent]The mesh will be rendered as a point cloud during navigation.

clip_image065

Fig. 33: Navigating while in spline drawing mode.[/wptabcontent]

[wptabtitle] CHECK SPLINE ALIGNMENT[/wptabtitle] [wptabcontent]Zooming out provides an opportunity to check that new features align well with previously created ones, and that the overall interpretation is coherent.

clip_image067

Fig. 34: Checking the coherence of a group of splines. [/wptabcontent]

[wptabtitle] COMMIT EDITS[/wptabtitle] [wptabcontent]Commit edits to the “Spline” tool by clicking on the “Check” box.

clip_image069

Fig. 35: Committing edits for the spline tool. [/wptabcontent]

[wptabtitle] ADD MORE SPLINE SEGMENTS[/wptabtitle] [wptabcontent]To add further continuous segments, continue to use the “Spline” tool. Hover the mouse over the endpoint and it will be highlighted with a dashed circle around it. Clicking will snap the first point of a new polyline to the highlighted endpoint.

clip_image071

Fig. 36: Snapping with the spline tool. [/wptabcontent]

[wptabtitle] SPLINE TOOL VS. PENCIL TOOL[/wptabtitle] [wptabcontent]Comparing the Pencil and Spline tools

The Spline Tool

-Advantages: Using the spline tool you can place vertices in meaningful locations and have more control over the form of the final vector.

-Disadvantages: This method is slower than using the pencil tool and requires frequent navigation in the point cloud to ensure you are picking good points for vertex placement.

The Pencil tool

-Advantages: Significantly faster drawing and snapping to faces on the mesh, continuous point generation.

-Disadvantages: If there are holes in your mesh, the vector created by the pencil can more easily ‘fall through’ those holes, creating wild loops.


[/wptabcontent]

[wptabtitle] ADDING ANNOTATIONS[/wptabtitle] [wptabcontent]To insert text annotations into the model, enter the 3D sketch mode by selecting the 3D Sketch button from the main menu.

Then select the Text button.

Left click at the desired location to insert the text. Adjust options for size and font as desired.

n.b. The Rapidform freeviewer provides more flexible options for inserting annotations. It may be preferable to complete this step of the process using the freeviewer. More information on annotating models can be found in the Rapidform Freeviewer workflow.

clip_image073

Fig. 37: Adding text annotations.

[/wptabcontent] [/wptabs]

Posted in Workflow

Rapidform: Digitizing Using the Pencil Tool

These tutorials will show you how to digitize archaeological features from terrestrial scan data.
Hint: You can click on any image to see a larger version.
 

[wptabs style=”wpui-alma” mode=”vertical”][wptabtitle] THE 3D MESH SKETCH TOOLBAR[/wptabtitle] [wptabcontent]Select the 3D Mesh Sketch toolbar.

clip_image045

Fig. 23: The 3D Mesh Sketch Menu.

And then select the “Pencil” tool

clip_image047

Fig. 24: The Pencil button.[/wptabcontent]

[wptabtitle] TOGGLE BETWEEN NAVIGATION AND DRAWING[/wptabtitle] [wptabcontent]Click the center mouse button to toggle between Navigation and the Pencil tool. clip_image049
[/wptabcontent]

[wptabtitle] DRAWING WITH THE PENCIL TOOL[/wptabtitle] [wptabcontent]Zoom in and navigate so that you can clearly see the polyfaces where you wish to place a boundary. The selected polyface will be highlighted in yellow.

clip_image051

Fig. 26: Drawing with the pencil tool.

Think carefully about whether you want to place boundaries on ‘top’ of an edge, in ‘front’ of an edge or on the edge itself. [/wptabcontent]

[wptabtitle] WILD LOOPS![/wptabtitle] [wptabcontent]clip_image053

Fig. 27: Pencil tool errors include the creation of undesired loops in polylines.

Be aware of holes remaining in the mesh. Drawing across a hole in the mesh with the pencil tool will result in a ‘wild loop’ where the polyline falls through the hole. [/wptabcontent]

[wptabtitle] DRAGGING POLYLINE VERTICES[/wptabtitle] [wptabcontent]clip_image055

Fig. 28: Dragging polyline vertices.

When you have completed drawing a polyline surrounding a ‘closed’ shape, you may need to drag one of the endpoints to snap to the start of the polyline. Select the endpoint by left clicking on it and drag until the target point is highlighted.[/wptabcontent]

[wptabtitle] CHECK YOUR POLYLINES[/wptabtitle] [wptabcontent]clip_image057

Fig. 29: Checking a polyline-mesh alignment.

Check your polyline from multiple viewpoints to make sure it intersects the desired polyfaces in the mesh.[/wptabcontent]

[wptabtitle] ADJUST YOUR POLYLINES[/wptabtitle] [wptabcontent]You can alter the polyline by selecting individual nodes and dragging along the mesh.

clip_image059

Fig. 30: Adjusting a polyline to match the mesh.

n.b. Be aware that when you drag the endpoint, or any vertex, on a polyline, you alter the shape of the entire line, and slightly shift all the vertices. Be sure to check that the overall placement of the polyline is satisfactory after making adjustments to any given vertex.[/wptabcontent]

[wptabtitle] Continue to the Spline Tool[/wptabtitle] [wptabcontent]In some cases using the pencil tool is not convenient. Cases include places where there are unhealed holes in the mesh and places where individual points need to be placed in close proximity to one another in an irregular pattern. Any drawing will likely involve both the pencil and spline tools.

Continue to Drawing with the Spline Tool.

[/wptabcontent][/wptabs]

Posted in Workflow, Workflow | Tagged , ,

Rapidform: Mesh Cleaning

These tutorials will show you how to digitize archaeological features from terrestrial scan data.
Hint: You can click on any image to see a larger version.
 

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] AN UNCLEAN MESH[/wptabtitle] [wptabcontent]If the mesh has many holes, as seen in the one below, it is worthwhile to repair the mesh.

clip_image035

Fig. 18: An unclean mesh sporting holes and non-manifold faces.[/wptabcontent]

[wptabtitle] ENTER THE MESH EDITING MODE[/wptabtitle] [wptabcontent]Push the “Mesh” button to enter the mesh editing mode.

clip_image037

Fig. 19: The Mesh mode button[/wptabcontent]

[wptabtitle] THE FILL HOLES BUTTON[/wptabtitle] [wptabcontent]Select the “Fill Holes” button.

clip_image039

Fig. 20: The Fill Holes button[/wptabcontent]

[wptabtitle] FILL HOLES PARAMETERS[/wptabtitle] [wptabcontent]Left click and drag to select the area for which you wish to fill holes. Adjust the options in the menu on the left to exclude any holes which should persist. The “Do Not Fill N-Biggest Holes” and “Do Not Fill If Hole is Bigger Than” options are particularly useful. Holes which will be filled are highlighted in bright blue.

clip_image041

Fig. 21: The Fill Holes parameters menu [/wptabcontent]

[wptabtitle] THE HEALING WIZARD[/wptabtitle] [wptabcontent]If RapidForm is unable to fill some of the holes, try applying the “Healing Wizard”, doing some manual cleaning of the mesh, and then try filling the holes again.

clip_image043

Fig. 22: The Healing Wizard Menu.

Tiny defects in the mesh, like the one shown highlighted in green, prevent holes from being closed.

n.b. Making a clean and watertight mesh can be a time consuming process, especially for complex, natural forms. If your project’s deliverables do not include the mesh, it may be pragmatic to allow some imperfections to remain and to work around them while digitizing. If the mesh is part of the project’s deliverables, be sure to budget sufficient time for this step in the process.

[/wptabcontent]

[wptabtitle] CONTINUE TO…[/wptabtitle] [wptabcontent]You can now continue to Digitizing using the Pencil Tool in Rapidform.[/wptabcontent]
[/wptabs]


Rapidform: Basic Workflow to start a digitizing project Part 2

(AKA The Meshing in Rapidform Tutorial)

These tutorials will show you how to digitize archaeological features from terrestrial scan data.
Hint: You can click on any image to see a larger version.
 

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] DELETE POINT CLOUD OUTLIERS[/wptabtitle] [wptabcontent]

Begin by cleaning any obviously erroneous noise from the dataset. Left click and drag to select outliers and press “Delete”.

clip_image023

Fig. 12: Obtaining the correct alignment.[/wptabcontent]

[wptabtitle] THE GENERATE MESH BUTTON[/wptabtitle] [wptabcontent]Select the “Point Cloud” button and then the “Generate Mesh” button from the point cloud menu.

clip_image025

Fig. 13: The Point Cloud button[/wptabcontent]

[wptabtitle] MESH THE WHOLE DATASET[/wptabtitle] [wptabcontent]Generate a mesh for the entire dataset by left clicking and dragging to select the entire point cloud.

clip_image027

Fig. 14: Whole model mesh construction.[/wptabcontent]

[wptabtitle] MESH A SUBSET OF THE DATASET[/wptabtitle] [wptabcontent]OR generate a mesh for a subset of the dataset by only selecting that area, either manually

clip_image029

Fig. 15: Limited area mesh construction.[/wptabcontent]

[wptabtitle] SUBSET USING THE VIEW CLIP BOX STEP 1[/wptabtitle] [wptabcontent]OR using a View Clip Box.

To use a View Clip Box, exit the “Construct Mesh” menu and select “View” and then “View Clip”.

clip_image031

Fig. 16: The View Clip tool.[/wptabcontent]

[wptabtitle] SUBSET USING THE VIEW CLIP BOX STEP 2[/wptabtitle] [wptabcontent]Select “Inside Box” and adjust the position and size of the box by clicking on the blue arrows and nodes (which turn yellow when selected). Press the “check” button to commit changes to the View Clip.

clip_image033

Fig. 17: Interactive view clipping.

Then return to the “Construct Mesh” menu to build the local mesh.[/wptabcontent]

[wptabtitle] CONTINUE TO…[/wptabtitle] [wptabcontent]If your mesh is especially dirty, continue to Mesh Cleaning in Rapidform.

Otherwise continue directly to Digitizing Using the Pencil Tool in Rapidform.[/wptabcontent]
[/wptabs]


Rapidform: Basic Workflow to start a digitizing project Part 1

 

These tutorials will show you how to digitize archaeological features from terrestrial scan data.
Hint: You can click on any image to see a larger version.

[wptabs style=”wpui-alma” effect=”slide” mode=”vertical”] [wptabtitle] START A PROJECT[/wptabtitle] [wptabcontent]-Create a new project by clicking “File” and “New”
-Import external scan points by going to “Insert” and “Import”

clip_image010
Fig. 5: Import Data into Rapidform

[/wptabcontent]
[wptabtitle] SELECT A FILE TO IMPORT[/wptabtitle] [wptabcontent]-Select the file containing scan data which you wish to import. For a list of valid file types, open the “Files of Types” dropdown menu.

clip_image012

Fig. 6: Supported file types are listed in the import menu dropdown.

[/wptabcontent]
[wptabtitle] IMPORT ONLY OR MESH ON IMPORT?[/wptabtitle] [wptabcontent]Decide whether or not you want to mesh your data now. If you want to inspect your data before meshing, select “Import only”. If this is the first time you’re working with the dataset, “Import only” is probably a good idea.

[/wptabcontent]

[wptabtitle] SET THE SCAN RANGE[/wptabtitle] [wptabcontent]Set the Valid Scan Range to zero to import all the data.

clip_image014

Fig. 7: Setting the Valid Scan Range.

[/wptabcontent]

[wptabtitle] CONFIRM THE SCALE AND THE UNITS[/wptabtitle] [wptabcontent]Confirm that your data is imported at the correct scale and make any necessary adjustments by setting the “Unit” value.

clip_image015

Fig. 8: Input values should always be checked at the start of a project.

[/wptabcontent]

[wptabtitle] CHECK THE ALIGNMENT[/wptabtitle] [wptabcontent]Check the alignment of the scan data and adjust if necessary. A common adjustment is forcing Z to be up. Adjusting the alignment will allow you to use the “Viewport” buttons intuitively because the “Top” viewport will show your data from the top.

clip_image017

Fig. 9: The scan is misaligned when first imported. The front perspective shows a view from the top in this figure.

[/wptabcontent]

[wptabtitle] INTERACTIVE ALIGNMENT WIZARD STEP 1[/wptabtitle] [wptabcontent]Looking at the scan data from the Front perspective, the model is incorrectly aligned.

Use the interactive alignment wizard to change this by going to “Tools” and choosing “Align” and “Interactive alignment”.

clip_image019

Fig. 10: The Interactive Alignment tool.

[/wptabcontent]

[wptabtitle] INTERACTIVE ALIGNMENT WIZARD STEP 2[/wptabtitle] [wptabcontent]Select and move the x, y, and z axes in the left-hand window to adjust the alignment of the scan data. Your adjustments will be reflected in the right-hand window. When you are satisfied with your changes, click the “check” button.

clip_image021

Fig. 11: Navigation in the Interactive Alignment tool.

[/wptabcontent]

[wptabtitle] CONTINUE TO GENERATING A MESH[/wptabtitle] [wptabcontent]Now that your scan data is properly imported, scaled and aligned you are ready to generate a mesh. Continue to Generating a Mesh in Rapidform.

[/wptabcontent] [/wptabs]


Subsetting Meshes in Rapidform

 

  • This tutorial uses RapidForm XOR
  • Hint: Click on any image to view a larger version.
[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle]SUBSETTING MESHES IN RAPIDFORM[/wptabtitle]
[wptabcontent]Part of the GMV Rapidform Modeling Series


[/wptabcontent]
[wptabtitle]ACHIEVING MANAGEABLE FILE SIZES[/wptabtitle]
[wptabcontent]
-The meshes produced through terrestrial laser scanning and close range photogrammetry can be very large.

-To facilitate working with the data, it is sometimes useful to subset or reduce the density of the dataset. The desired dataset size might be described through the polyface count, number of vertices, or file size per model.

-Subsetting a mesh reduces the size of each individual file while maintaining the full resolution of the data; decimating reduces the data resolution but avoids splitting the data across several files.
[/wptabcontent]
[wptabtitle]THE SPLIT COMMAND[/wptabtitle]
[wptabcontent]Subsetting a mesh in Rapidform is simple.

– Enter the Mesh Mode by double clicking on the mesh you would like to subset in the left-hand menu tree.

– From the top menu select the Split command.


[/wptabcontent]
[wptabtitle]CHOOSE A SPLIT METHOD[/wptabtitle]
[wptabcontent]
In the Split command sub-menu select Method > By User Defined Plane. Under Plane Options, go to Base Plane and select the base plane that will split your mesh cleanly.

[/wptabcontent]
[wptabtitle]ROTATE THE PLANE TO SPLIT THE MESH[/wptabtitle]
[wptabcontent]
– Move and Rotate the plane until it is in the proper position to split the mesh.


[/wptabcontent]
[wptabtitle]SPLIT THE MESH[/wptabtitle]
[wptabcontent]
– The dialog will list the remaining region, which will be highlighted on the screen. Hit the continue arrow to split the mesh.


[/wptabcontent]
[wptabtitle]SELECT THE REMAINING REGION[/wptabtitle]
[wptabcontent]
– Select the Remaining Region which will appear highlighted in blue (or yellow if you hover over it with the mouse) and hit the check button to accept the results.


[/wptabcontent]
[wptabtitle]EXPORT THE RESULTS[/wptabtitle]
[wptabcontent]
– Export the resulting section of the mesh as a new file to save the result of the process.

Congratulations! You’re Done!
[/wptabcontent] [/wptabs]

 

Download this tutorial as a pdf.
[iframe src=”https://gmv.cast.uark.edu/wp-content/uploads/2012/02/rapidform_arch_plans.pdf” width = “800px” height = “500px”]


Optocat Project Templates

Having successfully set up the Breuckmann SmartSCAN HE as described in the Breuckmann SmartSCAN Setup Workflow, you are now ready to set up the parameters for your project and save them as a template.

BEGINNING YOUR OPTOCAT PROJECT:
[wptabs style=”wpui-alma” mode=”vertical”]

[wptabtitle] INITIALIZE PROJECT[/wptabtitle] [wptabcontent]1. Initialize a Project by clicking in the main left-hand menu on Scan > Contour Matching > Initialization (assuming you are not using the turntable in your project). A pop-up menu will appear where you can create a new project and select the directory where it will be stored, or you can initialize an existing project.

clip_image002
[/wptabcontent]

[wptabtitle] CREATE TEMPLATE[/wptabtitle]

[wptabcontent]2. Once you have created or initialized a project, a second pop-up menu will appear providing options for using a template for the project settings. Select the Full template, or a custom template you have created.

3. In the third pop-up menu, select the lens size you are using from the scanner serial number dialogue box. The lens size is appended to the end of the scanner serial number.

clip_image004[/wptabcontent]

[wptabtitle] POP UP MENU[/wptabtitle]
[wptabcontent]4. A fourth pop-up menu will now appear with a series of tabs along the top. Work your way through the tabs checking that the options are set as described below. If the parameters of an option are not specified, leave the defaults.[/wptabcontent]

[wptabtitle] CAPTURE TAB[/wptabtitle]
[wptabcontent]In the Capture tab set the Averaging to reduce noise option to 8 (Standard) and check the box next to Flickerless Shutter if you are working under fluorescent lights.

clip_image006[/wptabcontent]

[wptabtitle] MASKING TAB[/wptabtitle] [wptabcontent] In the Masking tab set Reliability to More Data.

clip_image008[/wptabcontent]

[wptabtitle] 2D FILTER TAB[/wptabtitle] [wptabcontent] In the 2D Filter tab select No Filter.
[/wptabcontent]

[wptabtitle] TRIANGULATION TAB[/wptabtitle]
[wptabcontent]In the Triangulation tab set 2D-Subsampling to Full Resolution and set 3D-Mesh Compression to Lowest.

clip_image010[/wptabcontent]

[wptabtitle] 3D FILTER TAB[/wptabtitle] [wptabcontent] In the 3D Filter tab set the Standard Filter size to 3x Pixelsize, set Cycles to 3, and leave all other parameters as the defaults.

clip_image012

In the Processing tab, leave the defaults.[/wptabcontent]

[wptabtitle] ALIGN TAB[/wptabtitle]
[wptabcontent] In the Align tab set Reliability to Ignore Reliability (counter-intuitive, we know!).

clip_image014 [/wptabcontent]

[wptabtitle] TEXTURE TAB[/wptabtitle]
[wptabcontent] In the Texture tab check the boxes next to Use Texture and Use Color and set your lighting condition appropriately.

clip_image016[/wptabcontent]

[wptabtitle] INFORMATION TAB[/wptabtitle]

[wptabcontent]
In the Information tab, fill in the metadata for your project.

clip_image018[/wptabcontent]

[wptabtitle] OPTIONS TAB[/wptabtitle]

[wptabcontent] In the Options tab check the Mains Frequency and pick the correct setting for the country in which you are working.

clip_image020[/wptabcontent]

[wptabtitle] ADVANCED TAB[/wptabtitle]
[wptabcontent] In the advanced tab, leave the defaults.
Click OK to accept your parameters for the project[/wptabcontent]

[wptabtitle] SAVE SETTINGS[/wptabtitle]
[wptabcontent]5. In the main menu select File and Save Settings as Template.[/wptabcontent]
[/wptabs]


Comparing 3D models in Rapidform

 

I. Introduction

A. Comparing two meshes is a common task in 3D modeling applications. A model created from laser scanning data might be compared with a model created from photogrammetric data to assess relative accuracy. A building’s façade might be documented with a laser scanner at six-month intervals to monitor erosion by comparing the meshes and individual profiles over time. This workflow demonstrates how to carry out a comparison in Rapidform XOR.

II. Aligning Models

A. Models with surveyed targets

1. In the Main Menu in Rapidform go into Tools > Scan Tools > Align between Scan Data.

clip_image002

Fig. 1: Align between Scan Data Menu

2. Select the Local Based on Picked Point method and select your Reference and Moving targets – the two meshes you are trying to align.

3. Check the box to Refine Alignment.

clip_image004

Fig. 2: Select reference and moving meshes

4. Zoom in close to each target and select it, first in the Reference and then in the Moving model. Right click to toggle between zooming and selection modes. Be sure to select the center of the target as accurately as possible. A colored pin will appear showing the picked center of each target.

clip_image006

Fig. 3: Pick point on corresponding targets in the moving and reference meshes.

5. Once all the targets have been matched, you can visually review the goodness of fit between the models by looking at the distance between the pins shown in the left-hand window of this menu. When you are satisfied, check the Align Between Scan Data box to complete the process. The alignment will be refined.

clip_image008

Fig. 4: When the meshes are aligned, complete the transformation.
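The goodness of fit you review visually amounts to residual distances between corresponding pins. As a minimal sketch of the idea (Rapidform computes this internally during Refine Alignment; the coordinates below are hypothetical), the root-mean-square residual over the picked point pairs summarizes the fit in one number:

```python
import math

def rms_residual(reference_pts, moving_pts):
    """RMS distance between corresponding picked points (after alignment)."""
    assert len(reference_pts) == len(moving_pts)
    sq = [sum((a - b) ** 2 for a, b in zip(p, q))
          for p, q in zip(reference_pts, moving_pts)]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical pin coordinates, in metres
ref = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
mov = [(0.002, 0.0, 0.0), (1.0, 0.001, 0.0), (0.0, 1.0, 0.002)]
print(rms_residual(ref, mov))  # a few millimetres indicates a tight fit
```

An RMS residual on the order of the scanner’s point accuracy indicates a tight fit; a much larger value suggests a mis-picked target.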

B. Models without targets

1. Aligning models without targets is inherently less accurate than alignments based on surveyed targets, because points on natural surfaces must be matched. Because the natural surfaces, as they are modeled, have slight local deviations, the algorithms used for fitting the models together can encounter problems.

2. The process is essentially the same as aligning models with surveyed targets, but with the additional requirement of selecting good targets on natural surfaces to achieve a closely aligned match. Follow the instructions in A1, A2 and A3.

3. Select points with high local curvature (ideally edges and corners) or intersections of linear features. Try to pick at least six points. When you are satisfied, check the Align Between Scan Data box to complete the process. The alignment will be refined.

clip_image010

Fig.5: On natural surfaces pick points on well defined features.

III. Assessing Differences between two Meshes

A. Mesh Deviations

1. In the Main Menu in Rapidform, select Measure > Mesh Deviations.

2. Select the meshes you are interested in as the Target Entities.

clip_image012

Fig. 6: The Mesh Deviations Menu

3. On the right hand side of the screen, set the Allowable Tolerance as the acceptable deviation between the meshes. Adjust the Color Bar to reflect the expected range of deviations between the meshes. Under Type select the appropriate deviation for your application, Max. Deviation is recommended for erosion monitoring. Click the ! under Deviation Option to recalculate the range.

clip_image014

Fig. 7: Properties of a mesh deviations map

4. Click the right arrow button in the Mesh Deviations menu to show the deviation overlaid on the mesh.

clip_image016

Fig. 8: Mesh deviations displayed over the mesh.
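Conceptually, a deviation map colors each vertex of one mesh by its distance to the nearest part of the other mesh. A brute-force sketch on small point samples illustrates the idea (tools like Rapidform use spatial indexing and true point-to-triangle distances; this simplification compares vertices only, and the coordinates are hypothetical):

```python
import math

def nearest_distance(p, cloud):
    """Distance from point p to its nearest neighbour in cloud (brute force)."""
    return min(math.dist(p, q) for q in cloud)

def deviations(mesh_a, mesh_b):
    """Per-vertex deviation of mesh_a's vertices from mesh_b's vertices."""
    return [nearest_distance(p, mesh_b) for p in mesh_a]

a = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
b = [(0.0, 0.0, 0.01), (1.0, 0.0, -0.02)]
devs = deviations(a, b)
print(max(devs))  # the Max. Deviation figure, as used for erosion monitoring
```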


Subsetting and Decimating Meshes in Rapidform XOR

Achieving manageable file sizes

The meshes produced through terrestrial laser scanning and close range photogrammetry can be very large. To facilitate working with the data, it is sometimes useful to subset or reduce the density of the dataset. The desired dataset size might be described through the polyface count, number of vertices, or file size per model. Subsetting a mesh reduces the size of each individual file while maintaining the full resolution of the data; decimating reduces the data resolution but avoids splitting the data across several files.
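The trade-off between the two approaches can be estimated up front. A small sketch of the arithmetic (the face counts below are hypothetical examples, not Rapidform output):

```python
def reduction_ratio(current_faces, target_faces):
    """Fraction of poly-faces to keep when decimating to a target count."""
    return min(1.0, target_faces / current_faces)

def pieces_needed(current_faces, max_faces_per_file):
    """Number of subset files needed to keep full resolution."""
    return -(-current_faces // max_faces_per_file)  # ceiling division

faces = 2_000_000
print(reduction_ratio(faces, 500_000))  # decimate: keep 25% of the faces
print(pieces_needed(faces, 500_000))    # or subset: 4 files at full resolution
```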

[wptabs style=”wpui-alma” mode=”vertical”]

[wptabtitle] GETTING STARTED[/wptabtitle]

[wptabcontent] Subsetting Mesh Data

Subsetting a mesh in Rapidform is simple.

1. Enter the Mesh Mode by double clicking on the mesh you would like to subset in the left-hand menu tree.

2. From the top menu select the Split command.

clip_image002

Fig. 1: The Split tool[/wptabcontent]

[wptabtitle] THE SPLIT COMMAND[/wptabtitle]

[wptabcontent]3. In the Split command sub-menu select Method > By User Defined Plane.

Under Plane Options, go to Base Plane and select the base plane that will split your mesh cleanly.

clip_image004

Fig. 2: Select a base plane.[/wptabcontent]

[wptabtitle] MOVE AND ROTATE THE PLANE[/wptabtitle]

[wptabcontent]4. Move and Rotate the plane until it is in the proper position to split the mesh.

clip_image006

Fig. 3: Manipulate the base plane.[/wptabcontent]

[wptabtitle] CLICK CONTINUE ARROW[/wptabtitle]

[wptabcontent]5. Hit the continue arrow to split the mesh.

clip_image008

Fig. 4: Select the region you wish to preserve. [/wptabcontent]

[wptabtitle] EXPORT[/wptabtitle] [wptabcontent]6. Select the Remaining Region which will appear highlighted in blue (or yellow if you hover over it with the mouse) and hit the check button to accept the results.

clip_image010

Fig. 5: The results of a split operation

7. Export the resulting section of the mesh as a new file to save the result of the process.[/wptabcontent]

[wptabtitle] DECIMATE IF NECESSARY[/wptabtitle]

[wptabcontent]III. Decimating a mesh

In addition to subsetting the mesh, you may want to decimate it to further reduce file size.

1. While in mesh editing mode select the Decimate tool from the top menu bar

clip_image012

Fig. 6: The Decimate Tool[/wptabcontent]

[wptabtitle] DECIMATING CONT.[/wptabtitle] [wptabcontent]2. In the Decimate tool menu, select Poly-Face Count and then set the Reduction Ratio or the Target Poly-Face Count. The level of reduction will depend on your project’s requirements. For import into ArcGIS (see the workflow) a target poly-face count of fewer than 500,000 is recommended. Select Large Data Mode, Preserve Color and Do not Move Poly-Vertices, set the High Curvature Area Resolution toward Dense, and hit the check button.

clip_image014
Fig. 7: Decimate the Mesh
The options in this menu are explained fully in the Rapidform Contextual Help.

3. Export the decimated mesh to save it as a new file.[/wptabcontent][/wptabs]


Working with terrestrial scan or photogrammetrically derived meshes in ArcGIS.

[wptabs mode=”vertical”] [wptabtitle] Introduction[/wptabtitle] [wptabcontent]

Many archaeological projects use a GIS to manage their data. After terrestrial scan or photogrammetric modeling data has been collected and cleaned, it may be convenient to integrate it into a project’s GIS setup. As ArcGIS is widely available and in use both in University research departments and government offices, we’re using it for the example here, but something like this should work for other GIS packages.

The first part of the workflow addresses working with meshes created from terrestrial scan data, and assumes you have existing meshes in Rapidform.

[/wptabcontent]

[wptabtitle] Decimation[/wptabtitle] [wptabcontent]

Before exporting a dataset for use in a GIS you may want to decimate the dataset to produce a lower resolution model for visualization. High resolution models can slow rendering down and make manipulation of the model difficult.

a. Select the model you will be exporting either graphically or through the menu tree on the left hand side of the screen.

b. In the main menu select Tools and then Scan Tools and Decimate Meshes

clip_image002

Fig. 1: Select the Decimate Meshes tool

c. In the Decimate Meshes menu confirm the selection of the Target Mesh.

d. Under Method choose Poly-Face Count for best control over the size of the resultant model.

e. Under Options set the Target Poly-Face Count. Numbers under 100,000 will render relatively quickly in ArcGIS. Inclusion of more than 500,000 polyfaces is not recommended.

f. Under More Options select Preserve Color.

g. Click “OK” to confirm and decimate the mesh.

clip_image004

Fig. 2: Select options for decimating the mesh.

[/wptabcontent]

[wptabtitle] Subsetting and Splitting Meshes[/wptabtitle] [wptabcontent]

(Skipping ahead a bit conceptually…) When you import your mesh data into ArcGIS each mesh is stored as a single multipatch. You don’t want to edit the shape of the multipatch in ArcGIS, only the placement (trust us on this). So any subsetting of the mesh needs to be performed before exporting from Rapidform (or other modeling software of your choice). Why subset or split a mesh?

a. Navigating in tight, enclosed spaces. You might want to be able to turn off the visibility of the back wall of a room or one half of a cistern to better visualize its interior.

b. Major sections of a mesh. If you have a scan of a building including several rooms or structures and you want to be able to visualize them individually, then they need to be made into discrete meshes.

[/wptabcontent]

[wptabtitle] Exporting[/wptabtitle] [wptabcontent]

a. Select the model you want to export from the menu tree on the left hand side of the screen.

b. Right-click and select “Export”. Select an appropriate file format (see the next step for choices).

clip_image006

Fig. 3: Export via the menu tree.

4. Export Formats

a. Get a list of valid export formats by looking in the dropdown menu of the export dialog box.

clip_image008

Fig. 4: Valid export file formats.

b. Suggested formats for export are VRML (file extension .wrl), Collada (.dae) and Autodesk 3ds Max (.3ds).

[/wptabcontent][wptabtitle] Advice on Textures and Color Data[/wptabtitle] [wptabcontent]

Modeling software manages color data in several ways. Color data might be recorded as UV coordinates referencing a separate texture file, or as per-vertex, per-face or per-wedge color information. Color data imported with scan data will typically default to storage as per-vertex color. ArcGIS only recognizes color data stored explicitly in texture files, so if your color data is currently stored in another form, you need to convert it.

[/wptabcontent]

[wptabtitle] Textures direct from Rapidform[/wptabtitle] [wptabcontent]

i. Select the Mesh mode from the main toolbar.

clip_image010

ii. Select Tools and Texture Tools and Convert Color to Texture. clip_image012

Fig. 5: Convert Color to Texture

 

iii. After creating the texture, export the model as usual.

iv. Export the texture by going in the Main Menu to Texture Tools, then Export Texture to save the texture file. Store it in the same folder as the model.[/wptabcontent]

[wptabtitle] Color and Texture in Meshlab[/wptabtitle] [wptabcontent]Sometimes you want more tools for color editing. Sometimes ArcGIS doesn’t like the textures produced by Rapidform. For this reason, we suggest an alternative method for setting the texture data using Meshlab. Meshlab is open source, and can be found at meshlab.sourceforge.net.

i. From Rapidform export a .VRML file by right-clicking (in the model tree menu on the left-hand side of the screen) on the mesh you wish to export and selecting Export.

ii. In Meshlab, open a new empty project. Go to File and Import Mesh.

clip_image014

Fig. 6: Import the Mesh to Meshlab

iii. Select the VRML file you just created and hit Open.

iv. Transfer the color information from per vertex to per face. In the main menu go to Filters, then to Color Creation and Processing, then to Transfer Color: Vertex to Face. Hit Apply in the resulting pop-up menu.

clip_image016

Fig. 7: Transfer color data from the vertices to the faces of the mesh.

v. From the Main Menu go to Filters, then to Texture, then to Trivial Per-Triangle Parametrization.

clip_image018

Fig. 8: Create texture data.

In the pop-up menu, select 0 Quads per line, 1024 for the Texture Dimension, and 0 for Inter-Triangle border. Choose the Space Optimizing method. Click Apply.

n.b. If you get an error along the lines of “Inter-Triangle area is too much” your Texture Dimension is too small for the dataset. Increase the texture dimension to resolve the error.

clip_image020

Fig. 9: Set the texture data parameters.
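Why increasing the Texture Dimension resolves this error can be seen with a back-of-envelope packing estimate. The model below (two triangles per square cell, a minimum usable cell size of a few pixels) is an assumption for illustration only, not MeshLab’s actual packing algorithm:

```python
import math

def min_texture_dimension(n_triangles, cell_px=4, border_px=0):
    """Rough lower bound on the texture dimension for trivial
    per-triangle parametrization: two triangles share one square cell."""
    cells = math.ceil(n_triangles / 2)
    cells_per_side = math.ceil(math.sqrt(cells))
    return cells_per_side * (cell_px + border_px)

# e.g. a 600,000-triangle mesh needs well over a 1024-px texture
print(min_texture_dimension(600_000) > 1024)
```

If the estimate exceeds your chosen Texture Dimension, expect the “Inter-Triangle area is too much” error and pick a larger power-of-two size.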

vi. In the Main Menu go to Filters and Texture and Vertex Color to Texture. Accept the defaults for the name and size. Tick the boxes next to Assign texture and Fill Texture.

clip_image022

Fig. 10: Transfer color data to the texture dataset.

 

vii. In the Main Menu go to File and Export Mesh. Make sure to UNTICK the box next to Vertex Color. Otherwise ArcGIS gets confused! Make sure the texture file is present. Click OK to save.

clip_image024

Fig. 11: Export the mesh as collada (dae).[/wptabcontent]

[wptabtitle] Preparing a GIS to receive Mesh data[/wptabtitle] [wptabcontent]

Once you have created your mesh files and exported them to collada or something similar and explicitly assigned texture data (not to be confused with vertex color, face color or wedge color data), you are ready to import the data into ArcGIS. Assuming your data is not georeferenced, follow the method below. If your data is georeferenced, head over to our Photoscan to ArcGIS post, and follow the import method described there.

1. Preparing the geodatabase

a. Open ArcCatalog any way you choose. Create a new geodatabase by right clicking on the folder where you wish to create the geodatabase and selecting New and File Geodatabase. Only geodatabases support the import of texture data, so don’t try to use a shapefile.

clip_image027

Fig. 14: Create a geodatabase in ArcGIS.

b. Create a multipatch feature class in the geodatabase.

c. Ensure that the X/Y domain covers the coordinates of any meshes you will be importing. View the Spatial Domain by right-clicking on the feature class and going to Properties and then to the Domain tab.

clip_image029

Fig. 15: Check the spatial domain of the new feature class.

d. If the spatial domain is not suitable, adjust the Environment settings by going to the Geoprocessing toolbar in the Main Menu. Scroll down to Geodatabase Advanced and adjust the Output XY Domain as needed. You can also adjust the Z Domain in this dialog box.

clip_image031

Fig. 16: Adjust the spatial domain in the environment settings.

[/wptabcontent]

[wptabtitle] Preparing the scene file. [/wptabtitle] [wptabcontent]

a. Open ArcScene and add base data such as a plan of the site, an air photo of the location, etc. The base data will allow you to control the location to which the model is imported. Add the empty multipatch feature class you just created.

clip_image033

Fig. 17: Add base data to a Scene.

b. Start editing either from the 3D editor toolbar or by right-clicking on the multipatch feature class in the Table of Contents and choosing Edit Features and Start Editing.

clip_image035

Fig. 18: Start editing in ArcScene. [/wptabcontent][wptabtitle] Importing the Scan data[/wptabtitle] [wptabcontent]

1. Import the vrml or collada file by selecting the Create Features Template for the multipatch and clicking on the base plan roughly in the location where you would like the mesh data to appear. Select the vrml or collada file from the Open File dialog box that appears. Wait while the file is converted.

clip_image037

Fig. 19: The vrml data is converted to multipatch on import.

2. You can now Move, Rotate, Scale the imported multipatch in ArcScene by selecting the feature using the Edit Placement tool and inputting values in the 3D Editing toolbar or by interactively dragging the multipatch feature.

clip_image039

Fig. 20: Select the multipatch feature to adjust its position and scale.

3. Once you are satisfied with the placement of the multipatch, you can add attribute data.

[/wptabcontent]

[wptabtitle] A note on rotation in Arcscene[/wptabtitle] [wptabcontent]

You can only rotate in the x-y plane (that is, around the z-axis) in ArcScene. If you need to rotate your data around the x or y axis, you need to do this in your modeling software before import. Bringing a .dxf of the polygon or point data you are trying to align the mesh with into your modeling software is probably the simplest way to get the alignment right. You may have to translate your .dxf to a local grid because most modeling software doesn’t like real-world coordinates. Losing the real coordinates during this step doesn’t matter because you’re just using the polygon data to set orientation around the x and y axes. You’ll get the model in the correct real-world place when you import into ArcScene.
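Since ArcScene only rotates about the z-axis, any x- or y-axis correction must be baked into the mesh vertices beforehand. Your modeling software does this for you; the sketch below only illustrates the underlying operation of rotating vertices about the x-axis:

```python
import math

def rotate_x(vertices, degrees):
    """Rotate 3D vertices about the x-axis by the given angle."""
    t = math.radians(degrees)
    c, s = math.cos(t), math.sin(t)
    return [(x, y * c - z * s, y * s + z * c) for x, y, z in vertices]

# A vertex on the y-axis ends up on the z-axis after a 90-degree turn
print(rotate_x([(0.0, 1.0, 0.0)], 90))
```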

clip_image041

Fig. 21: The textured mesh data appears over the correct location on the base plan.

[/wptabcontent]

[wptabtitle] Re-exporting[/wptabtitle] [wptabcontent]

4. At this point it’s probably a good idea to re-export a collada model of your newly scaled and located mesh data. If not, every time you update the model you will have to go through the scaling and locating process again.

a. In ArcToolbox go to Conversion Tools> To Collada> Multipatch To Collada. clip_image043

Fig. 22: Export Multipatch to Collada

b. Select the multipatch for export and the folder where you want the re-exported model to appear.

clip_image045

Fig. 23: Set parameters for export.

c. Check that the model has exported correctly by opening it in your modeling software.

n.b. You may have to reapply the textures at this point.

[/wptabcontent]

[wptabtitle] A note on features for attribute management [/wptabtitle] [wptabcontent]

It may be convenient to store attribute information in other related feature classes so that a single meshed model can have multiple, spatially discrete attributes. How you design your geodatabase will vary greatly dependent on project requirements.

clip_image047

Fig. 24: Additional related feature classes can be used to manage attribute data.

[/wptabcontent]

[wptabtitle] A note on just how much mesh data you can get into ArcScene.[/wptabtitle] [wptabcontent]

1. If you are using a file geodatabase, in theory the size of the geodatabase is unlimited and you can include all the mesh data you want.

2. In practice, individual meshes with more than 200,000 polygons have problems importing on an average desktop computer.

3. In practice, rendering becomes slow and jumpy with more than 200 MB of mesh data loaded into a single scene on an average desktop computer. The size and quality of your textures also have an impact here; compressed textures are probably a good plan.

4. In short, the limitation is on rendering and on what can be cached in an individual scene, rather than on storage in the geodatabase. Consider strategies such as displaying low-polygon-count meshes in a general scene, with links to high-polygon-count versions that are stored in the geodatabase but not normally rendered; these can be called up via links in an HTML popup, the attribute table, or another script.

[/wptabcontent]

[/wptabs]


Basic Digitizing of archaeological features from scan data in Rapidform: Sections, Profiles, Plans and Elevations

This workflow will guide you through the process of digitizing archaeological features from scan data using Rapidform.
Hint: You can click on any image to see a larger version.

General Considerations

Automatic feature recognition has improved greatly (and continues to improve) but isn’t quite up to most archaeological scenarios just yet. It follows that hand drawing, or ‘digitizing’, individual stones, patches of conservation materials, layers of mortar, etc. on scan data is part of many a project’s workflow. For information on drawing 3D vectors on scan data see Part 1 of the Rapidform GMV series.

For purposes of paper publications, it is sometimes convenient to produce a 2D rendering of a 3D object or feature. In archaeology, sections, profiles, plans, and elevation drawings are commonly used 2D renderings. The 3D vectors digitized from scan data can be converted into 2D drawings. When a 3D dataset is converted to a 2D plan, drawing conventions (e.g. line styles, shadings, and hatchings) become essential for conveying information. Conversion is therefore a two-step process: first, accurate 2D projections of the 3D vectors describing the feature or object are created; then appropriate drawing conventions are applied. The first stage is easily achieved in Rapidform. The second stage is better accomplished within a CAD, GIS or illustration program, which provides finer control over the line artwork.
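The first stage — producing accurate 2D projections of 3D vectors — amounts to projecting each vertex onto the chosen plane and expressing it in in-plane (u, v) coordinates. The Python sketch below illustrates the geometry only; it is not Rapidform code, and the inputs are invented for the example:

```python
def project_to_plane(points, origin, normal, u_axis):
    """Project 3D points onto a plane and return 2D (u, v) drawing
    coordinates. `normal` and `u_axis` must be orthogonal unit vectors."""
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    n, u = normal, u_axis
    # the second in-plane axis completes the basis: v = n x u
    v = (n[1] * u[2] - n[2] * u[1],
         n[2] * u[0] - n[0] * u[2],
         n[0] * u[1] - n[1] * u[0])
    out = []
    for p in points:
        d = tuple(pc - oc for pc, oc in zip(p, origin))
        out.append((dot(d, u), dot(d, v)))  # the normal component is dropped
    return out

# Vertices of a stone outline flattened onto a vertical plane (normal = +y):
outline = [(1.0, 0.2, 2.0), (1.5, 0.1, 2.4)]
flat = project_to_plane(outline, (0, 0, 0), (0, 1, 0), (1, 0, 0))
```

Dropping the component along the plane normal is what makes the result a true orthographic projection rather than a perspective view.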

 

[wptabs style=”wpui-alma” mode=”vertical”]

[wptabtitle] OPEN A PROJECT[/wptabtitle]

[wptabcontent]Basic Workflow to create a 2D plan from existing 3D vectors in Rapidform

1. Open a project containing your scan data and 3D vectors.

clip_image002


Fig. 1: A Rapidform project with 3D vectors outlining individual stones in a wall.

[/wptabcontent]

 

[wptabtitle] CREATING A REFERENCE PLANE 1[/wptabtitle] [wptabcontent]

2. Create a reference plane where you want your 2D vectors to appear. Many archaeological features don’t have an obvious, neatly planar ‘top’ or ‘front’ face. Try to orient the reference plane parallel to the main orientation of the feature you are drawing. When choosing the orientation, you may want to consider placing perpendicular reference planes for the creation of a top plan or another elevation or section at the same time.

clip_image004

Fig. 2: This wall is substantially vertical but the front face, as seen from the path, is clearly not a flat plane.
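Rapidform’s Extract option fits a reference plane to all the selected data. As a much-simplified illustration of what a reference plane is geometrically — an origin plus a unit normal — the sketch below derives one from three picked points. This is illustrative Python, not Rapidform’s many-point fitting algorithm:

```python
import math

def plane_from_points(p1, p2, p3):
    """Define a plane from three non-collinear points:
    returns (origin, unit normal)."""
    ax, ay, az = (p2[i] - p1[i] for i in range(3))
    bx, by, bz = (p3[i] - p1[i] for i in range(3))
    # normal is the cross product of two in-plane edge vectors
    nx = ay * bz - az * by
    ny = az * bx - ax * bz
    nz = ax * by - ay * bx
    mag = math.sqrt(nx * nx + ny * ny + nz * nz)
    if mag == 0:
        raise ValueError("points are collinear")
    return p1, (nx / mag, ny / mag, nz / mag)

# Three points picked on a roughly vertical wall face (invented values):
origin, normal = plane_from_points((0.0, 0.0, 0.0),
                                   (2.0, 0.1, 0.0),
                                   (0.0, 0.1, 2.0))
```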

[/wptabcontent]

[wptabtitle] CREATING A REFERENCE PLANE 2[/wptabtitle] [wptabcontent]

a. From the main menu, select “Insert” and from the dropdown menus select “Ref. Geometry” and then “Plane”.

clip_image006

Fig. 3: Select Insert Reference Plane.

b. In the Add Ref. Plane menu a number of options are available for creating the plane. For the creation of an initial plane oriented in line with a set of features, the “Extract” option may be the simplest choice.

clip_image008

Fig. 4: Select a method for defining the reference plane.

[/wptabcontent]

[wptabtitle] CREATING A REFERENCE PLANE 3[/wptabtitle] [wptabcontent]

c. Select the entities you wish to include in the plan by left-clicking and sweeping the mouse across the entities.

Hint: Remember that the middle button on the mouse toggles between navigation and selection. clip_image010

clip_image012

Fig. 5: Select options within the Add. Ref Plane menu.

d. In the Add Ref. Plane menu under Fitting Options set the Fitting Type to “Max. Bound” to create a plane surrounding all the selected entities.

e. Under Constraint Options select Axis Constraint and Use Specified Axis As Initial Guess. Then Select User Defined and choose the axis parallel to the long side of the reference plane you are creating. In the example in Fig. 5 this is the x-axis.[/wptabcontent]

[wptabtitle] CREATING A REFERENCE PLANE 4[/wptabtitle] [wptabcontent]

f. Click “OK” to create the Reference Plane.

clip_image014

Fig. 6: Click “OK” to accept the settings and create the Reference Plane.

[/wptabcontent]

[wptabtitle] EXPAND REFERENCE PLANE IF NECESSARY[/wptabtitle] [wptabcontent]

3. (optional) If the resulting Reference Plane does not entirely enclose the entities, you likely need to expand it.

clip_image016

Fig. 7: The Reference Plane does not entirely enclose the entities.

a. In the Add Ref. Plane menu select the Convert option for defining a plane. Then select the Reference Plane you just created. Click “OK” and a larger plane will be defined.

clip_image018

Fig. 8: The final reference plane should fully enclose all the features you are including in the plan.[/wptabcontent]

[wptabtitle] CREATE ORTHOGONAL REFERENCE PLANE 1[/wptabtitle]

[wptabcontent]

4. (optional) Create Orthogonal Reference planes for further plans.

a. In the Add Ref. Plane menu select the Rotation option. Select the plane you just created; this is the object that will be rotated. Select an entity, such as a polyline, as the axis of rotation. Set the rotation Angle to 90 degrees. Click “OK” to create a new reference plane.

Hint: It may be useful to sketch a rectangle around the Reference Plane to create straight polylines for use as rotational axes.

clip_image020

Fig. 9: The Rotational method of creating a Reference Plane.
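Geometrically, the Rotation method applies a rotation about the chosen polyline axis. Rapidform does this internally; the Python sketch below (Rodrigues’ rotation formula, with invented inputs) just shows the underlying operation:

```python
import math

def rotate_about_axis(point, axis_point, axis_dir, angle_deg):
    """Rotate a 3D point about an arbitrary axis using Rodrigues'
    formula. `axis_dir` must be a unit vector."""
    a = math.radians(angle_deg)
    c, s = math.cos(a), math.sin(a)
    ux, uy, uz = axis_dir
    # work relative to a point on the axis
    vx, vy, vz = (point[i] - axis_point[i] for i in range(3))
    d = vx * ux + vy * uy + vz * uz          # k . v
    kx = uy * vz - uz * vy                   # k x v
    ky = uz * vx - ux * vz
    kz = ux * vy - uy * vx
    rx = vx * c + kx * s + ux * d * (1 - c)
    ry = vy * c + ky * s + uy * d * (1 - c)
    rz = vz * c + kz * s + uz * d * (1 - c)
    return (rx + axis_point[0], ry + axis_point[1], rz + axis_point[2])

# Swing a plane corner 90 degrees about a polyline along the z-axis:
corner = rotate_about_axis((1.0, 0.0, 0.0), (0.0, 0.0, 0.0),
                           (0.0, 0.0, 1.0), 90.0)
```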

[/wptabcontent]

[wptabtitle] CREATE ORTHOGONAL REFERENCE PLANE 2[/wptabtitle] [wptabcontent]

b. Enter Sketch mode by selecting “Sketch” from the main toolbar.

clip_image022

Fig. 10: The Sketch button on the Main menu.

c. In the Sketch menu select the Rectangle tool. Select the Reference Plane just created as the Base Plane.

d. Draw a Rectangle over the area where you want to create the next Reference Plane. Click “OK” to accept the rectangle.

clip_image024

Fig. 11: The Rectangle tool in the Sketch toolbar.

[/wptabcontent]

[wptabtitle] CREATE ORTHOGONAL REFERENCE PLANE 3[/wptabtitle] [wptabcontent]

e. The rectangle should be nicely perpendicular to the first Reference Plane you created.

clip_image026

Fig. 12: The rectangle is perpendicular to the first Reference Plane.

f. Return to the Add Ref. Plane menu and select the Convert option. Select the Rectangle just created and click “OK” to create the Reference Plane.

Hint: Delete the intermediate planes after you have created the final Reference Planes.

[/wptabcontent]

[wptabtitle] PROJECT 3D SKETCHES ONTO PLANES[/wptabtitle] [wptabcontent]

5. Once the Reference Planes are prepared you can project 3D sketches onto those planes.

a. From the Main Menu enter Sketch mode.

b. Set the Base Plane to the Reference Plane on which you wish to project features.

c. Select the Convert Entities button from the Sketch Menu.

clip_image028

Fig. 13: The Convert Entities button.

d. Select the entities you wish to project onto the reference plane by holding down the left mouse button and sweeping across them. Click “OK” to convert the entities from 3D to a 2D projection.

[/wptabcontent]

[wptabtitle] PROJECT 3D SKETCHES – CONT.[/wptabtitle] [wptabcontent]

e. The features you selected should now appear as a 2D sketch listed in the Sketches menu on the left side of the screen.

clip_image030

Fig. 14: The 2D projection of the 3D vectors appears on the Reference Plane and in the Sketches list.
[/wptabcontent]

[wptabtitle] EXPORT[/wptabtitle] [wptabcontent]

6. The 2D sketch can now be exported for annotation and manipulation in a CAD, GIS or Illustration program.

a. Right-click on the sketch in the Sketch list and select Export. Select the output format; .dxf is probably a good option.

clip_image032

Fig. 15: Export the Sketch.

b. Applying drawing conventions is not covered in the Rapidform tutorials. Be aware of regional (and even project to project!) variations in drawing conventions. We refer you to Approaches to Archaeological Illustration: A Handbook (Ed. Steiner, M., 2005, Council for British Archaeology and Association of Archaeological Illustrators and Surveyors).

[/wptabcontent] [/wptabs]


Rapidform XOR3 Interface Basics

Rapidform, offered by Inus Technology, Inc., is a 3D scan data processing package ideal for product redesign, inspection, and reverse engineering applications. Rapidform offers three primary packages for data processing: XOS (Scan), XOR (Redesign), and XOV (Verifier). This document discusses the basic interface and operations in XOR (XOS and XOV are very similar).

Mouse Button Functionality

The standard mouse button functionality for Rapidform is as follows:

Left: Rotate
Middle: Rotate
Right: Rotate

Ctrl + Left or Right: Pan
Shift + Left or Right: Zoom

Zooming Center: Screen Center

Rapidform also offers six other software profiles for navigation, including profiles for Solidworks, UGS/NX, Polyworks, Geomagic, CATIA, and ProE Wildfire. Users coming from one of these applications can choose the profile that they are most used to working with. The user can also set the zooming center for scrolling to be either the screen center or the mouse position.

Note: If you cannot navigate around in a scene, chances are you are in Selection mode. To toggle between navigation mode (indicated by the two rotating arrows icon) and selection mode (indicated by the crosshairs icon), tap the middle mouse button. This is a key functionality for viewing data in XOR.

Global Vs. Local Mode

By default, when you open Rapidform the scene is in global mode. When in global mode, you cannot directly edit or create a new dataset. To edit a dataset, select the dataset’s file name in the table of contents (TOC) and then click the button in the menu bar that matches the type of dataset you are working with. For example, if you are working with a mesh, select the file name in the TOC and then select the Mesh button clip_image002 in the menu bar. You are now in local mesh mode and are presented with numerous mesh editing functions in the new menu bar. Notice that when you change from global mode to local mode, the primary toolbar changes to include the editing functions specific to the dataset you are working with.

clip_image004

clip_image006

Figure 1 (Top): Global Mode Toolbar (Bottom) Local mode for Mesh Datasets that displays available mesh editing tools

Once in local mode, you can perform any number of edits to your dataset. For example, with a mesh dataset you can perform color correction, hole-filling, smoothing and a host of other mesh editing operations. When you are done with an editing session, it is important to close the editing session by either clicking on the Mesh button clip_image007 again or by selecting the ‘check’ at the bottom right of the screen.

clip_image009

Figure 2: Click either the Mesh button or the ‘check’ to exit local mode

Once you exit local mode, be sure and save your dataset to save the recent changes.

Important Note: You cannot save your project while you are in a local editing session. If you try, you will receive the following warning:

clip_image011

If you click ‘OK’, all changes/edits will be lost and the scene will automatically return to global mode. Boo! If you prefer to keep the recent edits, click ‘Cancel’, exit local mode using the methods described above, and then save your project.

Selection Tools

To select a portion of a dataset in XOR, you have to be in local mode for the data type you are working with. To enter local mode for a mesh dataset, for example, select the mesh dataset in the table of contents and then select the Mesh button at the top left of the screen. Once in local mode, tap the middle mouse button once to enter selection mode; you can then perform a sub-selection using one of the different tools shown below.

clip_image013

Figure 3 (Left) Selection tools and (Right) Selection filters available in XOR

The freeform tool gives the user the most control over the selection. The paint brush tool is particularly nice for interactive selection, and the flood fill selection is great for selecting areas with similar characteristics. To append to a selection or to deselect, hold the Shift or Ctrl key, respectively. In addition, XOR offers a number of selection filters that restrict selection to certain data types (e.g. mesh faces, vertices, surface bodies).

Scene Display Options

The primary scene display options in Rapidform are accessed by selecting the Display tab just under the table of contents at the bottom left of the screen.

 clip_image016

Figure 4: Location of Display tab in Rapidform

In the display settings, users are presented with numerous options for controlling the display of objects in a scene. An image with descriptions of the primary display settings is shown below. Within the display settings, you can control the number of lights and the lighting direction, as well as set up a view clipping plane if desired. It is also worthwhile to explore the different mesh shading methods available and determine which is optimal for viewing your mesh data. Toggling the image texture off/on also reveals a lot about the surface topography of an object. Take a moment to familiarize yourself with the different display options.

clip_image018

Figure 5: Display tab settings

Note: The mesh display options as well as scene lighting and clipping plane options are also found under the main View menu.

You should now be familiar with the basics of the Rapidform XOR interface. Open your dataset and get started!



Rapidform: Basic Digitizing of archaeological features from scan data; Annotated 3D models

These tutorials will show you how to digitize archaeological features from terrestrial scan data.
Hint: You can click on any image to see a larger version.

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] GENERAL CONSIDERATIONS[/wptabtitle] [wptabcontent]

Before beginning a digitizing project in Rapidform consider what final products are required. Archaeological projects might require…

[/wptabcontent]

[wptabtitle] A 2D TECHNICAL ELEVATION OR PLAN[/wptabtitle] [wptabcontent]A 2D technical elevation or plan.

Technical elevations and plans utilize standardized conventions to represent 3D objects in 2D and include both metrical information and annotations. The multi-phase plan familiar from excavation is a classic example of technical drawing in archaeology.

clip_image002

Fig. 1: A 2D site plan (Image credit: Dobie and Evans 2010, p. 59, fig. 86)[/wptabcontent]

[wptabtitle] AN INTERPRETIVE DRAWING[/wptabtitle] [wptabcontent]An interpretive drawing.

An interpretive drawing is a representation of what the object, feature or structure looks like, emphasizing the salient aspects of the object which make it recognizable rather than metrics.

SF_74_ill SU_2043_ill_2

Fig. 2: An interpretive drawing of an inscription (top) and a vessel (bottom) from Gabii, Italy. (Image Credit: Katie Huntley, Gabii Project)[/wptabcontent]

[wptabtitle] AN ANNOTATED 3D MODEL[/wptabtitle] [wptabcontent]An annotated 3D model.

Rather than creating a simplified vectorized model of 3D scan data, annotations and measurements are placed directly onto the scan data. The scan data is cleaned but remains undifferentiated in either a point cloud or mesh format.

clip_image006

Fig. 3: 3d model of vessel from scan data, with user annotations added. (Image Credit: CAST, Virtual Hampson Museum, http://hampson.cast.uark.edu/)

Annotations on scan data can indicate interpretations, like the presence of abrasions, or measurements, or vector overlays on salient features.[/wptabcontent]

[wptabtitle] AN ANNOTATED 3D INTERPRETIVE MODEL[/wptabtitle] [wptabcontent]An annotated 3D interpretive model.

An interpretive 3D model is analogous to the 2D technical plan. It uses breaklines and solids to represent the form of the object and can include annotations and metrical information. Normally it will not include the original scan or mesh data.

clip_image008

Fig. 4: A solid model generated from scan data with individual features such as planks rendered as discrete objects. (Image credit: Traditional boats of Ireland Project, http://tradboats.ie/project/)[/wptabcontent]

[wptabtitle] THE DIGITIZING WORKFLOW[/wptabtitle] [wptabcontent]The digitizing workflow for each of these final products will be slightly different. This workflow covers the steps for the creation of an annotated 3D model.
[/wptabcontent]
[wptabtitle] CONTINUE TO…[/wptabtitle] [wptabcontent]Continue to step one of a digitizing project.[/wptabcontent] [/wptabs]


Creating a Polygonal Mesh using IMMerge – Polyworks V11

This workflow will show you how to create a polygonal mesh from an aligned point cloud using IMMerge in Polyworks V11.
Hint: You can click on any image to see a larger version.


Once a point cloud data set has been properly aligned and overlap reduction performed, it is ready for meshing. IMMerge is the Polyworks module that creates a mesh from an IMAlign project. Note: only IMAlign projects can be meshed. See below for specifics of IMMerge operations.

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] BASIC SETTINGS[/wptabtitle] [wptabcontent]

In the Polyworks Workspace Manager, select the IMAlign project that you want to mesh and then select Create a Polygonal Model in the Workspace Manager.

IMMerge – Basic Settings

Values Carried over from IMAlign Project (usually do not change)

Max Distance: The maximum distance between two overlapping scans
Surface Sampling Step: Average interpolation step across scans; determines the resulting mesh density
Standard Deviation: Approximate alignment error (in meters for Optech)

Smoothing Level: Can be modified. A Low to Medium level of smoothing is generally recommended.

[/wptabcontent]

[wptabtitle] ADVANCED SETTINGS[/wptabtitle] [wptabcontent]

Advanced Settings Explained – (usually do not change)

Reduction Tolerance
– Compresses the mesh by reducing the number of triangles (without losing definition)
– Typical Reduction Tolerance: 1/5 × the max standard deviation

Smoothing Radius
– Radius of the spherical filter used to smooth the resulting mesh
– The greater the smoothing radius, the more the mesh is smoothed
– Typical Smoothing Radius: 2–4 × the surface sampling step

Smoothing Tolerance
– Typical Smoothing Tolerance: 3 × the max standard deviation
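The rules of thumb above can be collected into a small helper for estimating starting values. This is an illustrative Python sketch of the tutorial’s suggestions, not part of Polyworks:

```python
def suggested_immerge_params(max_std_dev, sampling_step):
    """Rule-of-thumb IMMerge advanced settings (all values in meters):
    reduction tolerance = 1/5 of the max standard deviation,
    smoothing radius = 2-4x the surface sampling step,
    smoothing tolerance = 3x the max standard deviation."""
    return {
        "reduction_tolerance": max_std_dev / 5.0,
        "smoothing_radius": (2 * sampling_step, 4 * sampling_step),
        "smoothing_tolerance": 3 * max_std_dev,
    }

# e.g. a 1 cm alignment error and a 5 mm sampling step:
params = suggested_immerge_params(0.01, 0.005)
```

For this example, the suggested reduction tolerance works out to 2 mm and the smoothing tolerance to 3 cm, with a smoothing radius between 1 and 2 cm.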

Common error returned from IMMerge processing of very large datasets

Error 1413: Block size too small, or out of memory

– This is actually not a function of block size, so DO NOT increase the block size. The recommended block size and compaction are 200 and 20, respectively. Instead, change the subdivision settings: change “# of triangles per job” to “# of Merging jobs” and, where it says default, set a value. Start at 1000, then double it if the operation still does not merge. If it does not merge in 10,000 jobs or less, the dataset is probably too large.

– If the dataset will not merge, split the IMAlign project into two pieces and mesh the two pieces separately.

[/wptabcontent] [/wptabs]


Importing Optech Data into IMAlign – Polyworks V11

This workflow will show you how to import Optech data into IMAlign, part of Polyworks V11.
Hint: You can click on any image to see a larger version.


[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] IMAlign INITIAL SETUP[/wptabtitle] [wptabcontent]clip_image002[4]

Figure 1: Set IMAlign options as suggested below prior to importing any data

IMAlign is the Polyworks module that allows you to import and align scan data. When you first open IMAlign, it’s good to define a couple of parameters. Before you import any data, open the IMAlign options and change the Digitizer from Generic Close Range to Optech ILRIS 3D, and change the working units to meters. Turn off the Interactive Mode Wizard and the Unknown Units Wizard. Under the Alignment section, change Subsampling from 1/4 to 1/1, and under the Images section set Max Angle to 85 (see the illustration below in Section 4). Next, go to Tools – Save Configuration to save the new settings.[/wptabcontent]

[wptabtitle] DISPLAY PARAMETERS[/wptabtitle] [wptabcontent]clip_image004[4]

Figure 2: The eye icon controls how objects are displayed in the scene

Object Color Mode: Point or Default

Default Static Display
Drawing Type: Smooth or Flat
Subsampling: 1/1

Default Dynamic Display (recommended parameters)
Drawing Type: Point
Subsampling: 1/16 or 1/64[/wptabcontent]

[wptabtitle] IMPORTING OPTECH DATA[/wptabtitle] [wptabcontent]

Importing Optech Data (.pif format)

1. Go to File – Import Images – 3D Digitized Data Sets. Under Spherical Grids, choose Optech PIF. Next, navigate to the file and select Open. Typically you will not subsample the data set (choose 1/1); however, if file size and/or memory become an issue, note that import is the only place to subsample or decimate a data set in IMAlign.

The following is a basic description of the import parameters and additional guidance for modifying parameters.

Note: Always hit the Reset button before adjusting any import parameters, because previous import settings are retained.

When Optech data is imported into IMAlign, the data is automatically “gridded” or meshed. For spherical datasets like Optech data, IMAlign subdivides the data by angle and range, fits a plane to each data section, and then meshes each section individually. Subdividing Optech data by both angle and range (or distance) is therefore recommended. The smaller the angle, the more subscans are created. A Subdivision Angle of 20 is generally recommended; however, you may want to lower this value to 10 if there is a lot of ground/floor data in a scan.

Figure 3: Interface for importing Optech data into IMAlign[/wptabcontent]

[wptabtitle] INTERPOLATION OPTIONS – FOCUS[/wptabtitle] [wptabcontent]Interpolation Options

Focus Distance: The minimum distance, or the point at which data is first recorded – typically leave as is.
Step at Focus: The data resolution or point spacing at the focus distance (in meters)
Max Angle: The angle from normal for which values are accepted for a scan. A value of 75–85 is good for the Optech.

Figure 4: Graphic showing import parameters including the focus distance, number of ranges, and the minimum and maximum distances.[/wptabcontent]

[wptabtitle] INTERPOLATION OPTIONS – MAX EDGE LENGTH[/wptabtitle] [wptabcontent]Max Edge Length (Important): The distance at which points are connected or meshed; typically 10 × the step (point spacing) or less. A value that is too large can erroneously connect points that should not be connected (Figure 5, left). Decreasing the value will result in less data being imported (Figure 5, right).

Figure 5: Example of different max edge length values and the effect of these values on data import[/wptabcontent]

[wptabtitle] INTERPOLATION OPTIONS – NUMBER OF RANGES[/wptabtitle] [wptabcontent]Number of Ranges: Self-Explanatory. Adjust as needed. Distance between ranges is based on the focus distance.
Min Step: The smallest acceptable step value (typically do not change).

Focus Distance / Range   Interpolation Step   Max Edge Length
(FD)    10 meters        0.01 meters          0.1 (or less)
(Rng 1) 20 meters        0.02 meters          0.2
(Rng 2) 40 meters        0.04 meters          0.4
(Rng 3) 80 meters        0.08 meters          0.8

For each range extending out from the scanner, the interpolation step and max edge length double.
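That doubling pattern means the whole table can be generated from the focus distance and the step at focus. An illustrative Python sketch (the 10× max edge length rule follows the guidance earlier in this workflow; this is not Polyworks code):

```python
def range_table(focus_distance, step_at_focus, n_ranges):
    """Interpolation step and max edge length (here MEL = 10 x step)
    double with each range beyond the focus distance."""
    rows = []
    d, s = focus_distance, step_at_focus
    for _ in range(n_ranges + 1):
        rows.append({"distance": d, "step": s, "mel": 10 * s})
        d, s = d * 2, s * 2
    return rows

# Reproduces the table above: final row is 80 m / 0.08 m step / 0.8 MEL
table = range_table(10.0, 0.01, 3)
```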

Remember to always hit the Reset button when importing a new scan. Most of the time the default import values work well; occasionally the max angle or max edge length will need to be adjusted. Once imported, a scan is comprised of a collection, or group, of sub-scans or images. Note that the scans now have “surface information” and are no longer point clouds.[/wptabcontent]

[wptabtitle] CONTINUE TO…[/wptabtitle] [wptabcontent]

Continue to Registering Scans in Polyworks V11 IMAlign

[/wptabcontent] [/wptabs]


Registering (aligning) Scans in Polyworks V11 IMAlign

This document covers scan registration (alignment) and overlap reduction in Polyworks IMAlign.
Hint: You can click on any image to see a larger version.


[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] REGISTRATION – 1[/wptabtitle] [wptabcontent]

1. Import all scans into IMAlign. Lock (Ctrl + Shift + L, or right click – Edit – Lock) the first scan in the table of contents (TOC).
Note: To hide a scan’s digitizer position, click the plus next to the scan name in the table of contents, then the plus next to Reference Points, and middle-click the Digitizer Position (middle-clicking hides it).

2. Next select a scan in the TOC to align to the first scan and select the Split View Alignment icon clip_image002 to enter Local Mode.

Note: The two scans must have areas of overlap in order to align them. Also, you can only enter Split View Mode when the unlocked scan is selected.

3. Rotate both views so that they are oriented correctly with respect to one another (see Figure 1 below). This is important for identifying common points between the two scans.

[/wptabcontent]

[wptabtitle] REGISTRATION – 2[/wptabtitle] [wptabcontent]

4. Select N Point Pairs Alignment clip_image004.

5. Match at least 3 pairs of corresponding points between the two views (no more than 5 or so is necessary). Points should be spread across the entire scan (not concentrated in one area) and multi-planar. Right-click when finished.

Figure 1: Identifying common tie points between two scans

6. Next, run a best fit alignment clip_image008 between the scans. This is an iterative alignment that fine-tunes the N point pairs alignment. Subsampling should be 1/1; optionally, you can choose to Show slipping color map. In the statistics tab, the Standard Deviation reflects the accuracy of the alignment. Ideally (though it doesn’t always happen) you want the standard deviation to be less than the accuracy of the scanner. For the Optech, this is approximately 1 centimeter (0.01 m). For the Minolta VIVID 9i, this varies based on the lens that you use (tele = 200 microns (0.2 mm), mid = 400 microns (0.4 mm)). If the best fit alignment is taking too long to run but the alignment is close enough (the standard deviation is less than or equal to the accuracy of the digitizer, and the Convergence value and mean are very close to 0), then the alignment can be stopped. This accepts the best fit alignment computed up to that point.
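The acceptance test described here — a standard deviation at or below the digitizer’s accuracy — is easy to state explicitly. The following Python sketch uses the accuracy figures quoted above (the dictionary keys are invented labels):

```python
# Digitizer accuracies in meters, from the guidance above
DIGITIZER_ACCURACY = {
    "optech": 0.01,           # ~1 cm
    "vivid9i_tele": 0.0002,   # 200 microns
    "vivid9i_mid": 0.0004,    # 400 microns
}

def alignment_ok(std_dev_m, digitizer):
    """Ideally the alignment's standard deviation is at or below the
    digitizer's accuracy."""
    return std_dev_m <= DIGITIZER_ACCURACY[digitizer]

ok = alignment_ok(0.008, "optech")          # True: 8 mm beats ~1 cm
bad = alignment_ok(0.0005, "vivid9i_mid")   # False: 0.5 mm exceeds 0.4 mm
```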

[/wptabcontent]

[wptabtitle] REGISTRATION – 3[/wptabtitle] [wptabcontent]

7. Once the best fit alignment is complete, lock the second scan. Select the next scan to align and repeat Steps 2-6.

8. Once all of the data has been aligned, compute one final Global or Best Fit Alignment. This adjusts the alignment across the entire dataset. To do this, select a single scan that has good coverage across the entire dataset and lock it, then unlock all remaining scans. Run the Best Fit Alignment one final time. Once all alignments have been completed, save your IMAlign project. A recommended naming convention is project_name_GR (GR = Global Registration).

TIP: Scans can also be grouped together allowing you to hide, select and perform numerous operations on a group of scans at the same time. [/wptabcontent]

[wptabtitle] OVERLAP REDUCTION – 1[/wptabtitle] [wptabcontent]

Overlap reduction is performed to remove overlapping or redundant data between scans. This is recommended prior to meshing and/or to reduce the size of the dataset. Make sure to save a copy of your dataset prior to overlap reduction. Next, select Image – Reduce Overlap.

The Overlap Reduction parameters are discussed more below:

Max Distance: Determined by the scanner
Max # of overlapping images: Recommended 5
Number of overlap layers: Recommended 3
Best Data (Std): Default (when all scans are the same resolution) – typically use this option
Best Data (Extreme): Used when you have scans of multiple resolutions. This ensures that high resolution data will not be replaced by overlapping lower resolution data. Select all high resolution scans in the TOC and then run the operation.

Suggested Setting: Best Data (Std)

[/wptabcontent]

[wptabtitle] OVERLAP REDUCTION – 2[/wptabtitle] [wptabcontent]

Data Quality:
Viewing Angle: Accepts points that are orthogonal to the scan direction (directly in front of the scanner) rather than those that are more oblique.

Point-to-Digitizer Distance: Accepts points that are closer to the scanner rather than those that are further away.
Suggested Setting: Viewing Angle
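The Viewing Angle criterion can be pictured as comparing, for each overlapping point, the angle between the surface normal and the ray back to the digitizer, and keeping the observation seen most nearly head-on. A simplified Python sketch (illustrative only; IMAlign’s actual implementation is internal to Polyworks):

```python
import math

def viewing_angle_deg(point, normal, scanner_pos):
    """Angle between the surface normal at `point` and the ray back to
    the digitizer; 0 means the surface was scanned head-on."""
    ray = tuple(s - p for s, p in zip(scanner_pos, point))
    mag = math.sqrt(sum(c * c for c in ray))
    ray = tuple(c / mag for c in ray)
    cosang = max(-1.0, min(1.0, sum(r * n for r, n in zip(ray, normal))))
    return math.degrees(math.acos(cosang))

# Two scanners saw the same point on an upward-facing surface; the
# Viewing Angle criterion would keep the head-on observation:
p, n = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
angle_a = viewing_angle_deg(p, n, (0.0, 0.0, 5.0))   # head-on
angle_b = viewing_angle_deg(p, n, (4.0, 0.0, 3.0))   # oblique view
```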

Sometimes manual overlap reduction is required in addition to the automatic reduction in order to visually clean up the data. The best way to go about this is to systematically select data using the Select Scan on Screen tool clip_image012, toggling scans (using the Hide and Keep commands) and deleting the redundant data. This is done primarily for visual purposes.

Once overlap reduction is completed on your project, save your project with a different file name. A recommended naming convention is project_name_GRE (GRE = Global Registration with additional edits/overlap reduction). It is best to save projects with different names at key processing steps, in case one wants to return to a specific processing point.

For more on meshing, see Meshing using Polyworks IMMerge V11.

[/wptabcontent]

[wptabtitle] ALIGNING MULTIPLE IMAlign PROJECTS[/wptabtitle] [wptabcontent]

Aligning Multiple IMAlign Projects

 

Multiple IMAlign projects can also be aligned to one another. This allows users to work on different sections of a site and then merge those sections into one project. A global alignment and overlap reduction should be completed on each individual project before the projects are brought together. When an IMAlign project is imported into an existing project, the screen goes straight into a split view alignment. Align the two datasets using N point pairs and then compute a Best Fit Alignment. Repeat for multiple projects. Multiple datasets can also be aligned in IMInspect if necessary.

[/wptabcontent] [/wptabs]


University of Arkansas – Vol Walker Building – Interior Floor 2

 

Working with the University of Arkansas’ Facilities Management and Planning Departments, CAST is documenting the historical Vol Walker Building and its renovation.  Here are merged scans of the second floor of the interior, which were collected with the Z+F 5005i Scanner.  The project includes multiple floors within the building interior as well as the building exterior.  Interior scans were collected with a point spacing that ranged from less than a centimeter at the most dense (at a range of < 1 meter) to approximately 5 cm at the least dense (at a range of 25 meters).  These scans were then reduced to a more consistent point spacing of 1 cm for potential future use in historical preservation documentation.  Exterior scans were collected with a point spacing of approximately 5-10 cm.  The data sets have been separated due to file size and data density.

Vol_Walker_Building_Interior_Floor_2.zip (2.81 gb) (1 cm spacing in .pts file format)

Sitemap_Vol_Walker_Floor_2.htm – Explore the data set in Leica TruView, which requires the free Leica TruView viewer and Internet Explorer. For instructions on using the free TruView data viewer and for a complete list of links to the TruView data related to this project, please see: Accessing Vol Walker Interior TruViews.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:

Credits:

Data was collected in collaboration with University of Arkansas Facilities Management, Operations and Maintenance and Campus Planning Divisions with outstanding assistance from Bob Harris, Construction Coordinator.

Posted in United States, University of Arkansas, Fayetteville scanning

University of Arkansas – Vol Walker Building – Interior Floor 1

 

Working with the University of Arkansas’ Facilities Management and Planning Departments, CAST is documenting the historic Vol Walker Building and its renovation. Here are merged scans of the first floor of the interior, which were collected with the Z+F 5006i Scanner. The project includes multiple floors within the building interior as well as the building exterior. Interior scans were collected with a point spacing that ranged from less than a centimeter at the most dense (at a range of < 1 meter) to approximately 5 cm at the least dense (at a range of 25 meters). These scans were then reduced to a more consistent point spacing of 1 cm for potential future use in historic preservation documentation. Exterior scans were collected with a point spacing of approximately 5-10 cm. The data sets have been separated due to file size and data density.

Vol_Walker_Building_Interior_Floor_1.zip (2 gb) (1 cm spacing in .pts file format)

Sitemap_Vol_Walker_Floor_1.htm – Explore the data set in Leica TruView, which requires the free Leica TruView viewer and Internet Explorer. For instructions on using the free TruView data viewer and for a complete list of links to the TruView data related to this project, please see: Accessing Vol Walker Interior TruViews.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:

Credits:

Data was collected in collaboration with University of Arkansas Facilities Management, Operations and Maintenance and Campus Planning Divisions with outstanding assistance from Bob Harris, Construction Coordinator.

Posted in United States, University of Arkansas, Fayetteville scanning

Z+F Laser Scanner Workflow: Setting up the Scanner

This workflow will show you how to setup the Z+F Laser Scanner prior to beginning your scan.
Hint: You can click on any image to see a larger version.

[wptabs style=”wpui-alma” mode=”vertical”]
[wptabtitle] INSTRUMENT’S COMPONENTS[/wptabtitle]

[wptabcontent]

Instrument’s Components

 

[/wptabcontent]

[wptabtitle] PRE-SCANNING CHECK[/wptabtitle] [wptabcontent]

Pre-scanning Check

1. Do not scan in rain, snow or fog

2. Protect scanner from excess moisture and rain

3. If temperatures are outside the calibrated range, an error message will be displayed. Measurement accuracy cannot be guaranteed if scanning proceeds

4. If the scanner is taken from a cold environment into a warm, humid one, the glass window and even the optics can fog up, causing measurement errors, as do dust and fingerprints

 

[/wptabcontent]

[wptabtitle] Z+F SCANNER SETUP[/wptabtitle]

[wptabcontent]

Z+F Scanner Setup

1. Set up tripod as stable and level as possible

*Never use the scanner without the tripod

2. Remove the instrument from its case with both hands, grasping the handle on top with one hand and reaching under the base with the other in the space provided. Refrain from lifting by the tribrach (base mount), as it can turn unexpectedly

3. Place instrument on tripod and secure

Always keep one hand on the handle until the scanner is firmly attached to the tripod

4. Level using the leveling screws and built-in bubble level (you will also check the level under the Tilt menu on the instrument once it is on)

5. Power supplies: A) Li-Ion battery pack; B) KNL-24 when working near an electric outlet

6. Check to verify that the lens is perfectly clean

 

[/wptabcontent]

[wptabtitle] M-CAM SETUP[/wptabtitle] [wptabcontent]

M-Cam Setup

1. Mount the M-Cam to the top of the scanner, upright on the same side of the scanner as the keypad and display

2. Align the two adjustment pins on the scanner with the corresponding holes in the camera mount base and fasten to the screw thread in between the pins

3. Remove the port cover that reads “Press”

4. Connect the two USB cables from the M-Cam to the USB ports. There is no particular order

5. Connect the Lemo-Plug to the right hand 7 pin connector port (it is marked with an “M”)

6. Remove the lens cover

7. Place the gray card on the corresponding magnetic plate on the scanner below the lens. The red dot should be nearest to the edge in the view of the camera

8. After each scan, the M-Cam will collect three sets/rotations of pictures in the vertical direction, starting with the lowest rotation.

 

[/wptabcontent]

[wptabtitle]CONTINUE TO…[/wptabtitle]
[wptabcontent] Continue to part 2 in the series, Z+F Laser Scanner: Starting Your Scan.
[/wptabcontent]
[/wptabs]

Posted in Hardware, Scanning, Setup Operations, Workflows, Z+F 5006i

UPDATE – Guidelines for preparing workflows/documents for GMV

Options for posting to GMV

1. Creating and publishing a blogpost directly out of Word

2. Publishing from Windows Live Writer (strongly recommended)

Guidelines for creating your workflows in Word

1. Make all images “In line with text”

2. Make all image/table captions text. Do not use text boxes or MS Word captions – these are translated as images when published to GMV.

3. Group side-by-side images (as shown below).

clip_image002[4]

Figure 1: Sample group image and also sample figure text


Sometimes this can be difficult to do in Word. If so, select an image and copy it – open Photoshop – create a new document – increase your canvas width to roughly 2× – paste the first image, then copy and paste the second image out of Word. Select both images in PS and group them (Ctrl + G). Select the image group and copy and paste it back into Word.

4. Always hit Enter between an image and its caption (not Shift+Enter)

5. Some text formatting translates across to LiveWriter, but not all. For example, using preset styles such as Heading 1 – Heading 3 will retain the font size but not the blue color. Also, the reduced font size above for Figure 1 is not retained. So you can format your text a bit in Word, but be aware that you may have to reformat in LiveWriter. See the screenshot below of a copy and paste from THIS document into LiveWriter.

clip_image004[4]

Setting up LiveWriter

LiveWriter is part of Windows Live Essentials. It’s free to download and install on Windows machines. When installing Live Essentials, the default is to install not only LiveWriter but also a slew of other software; anything you don’t want can be unchecked during installation.

1. Once you have LiveWriter installed, you’ll need to set it up to publish to the GMV. Select – Add Blog Account from the main toolbar. For blog service – choose WordPress. Next enter https://gmv.cast.uark.edu/ for the blog web address and then enter your username and password for GMV (contact Snow if you need a username/password). Hit finish and you should now be connected.

2. By default, LiveWriter resizes images and makes them clickable to see the larger image version. I personally like my images to remain the same size as they are in the original Word document. To format images in LiveWriter, first copy and paste an image from a Word doc into LiveWriter and then double click the image to access the Picture Tools. In Picture Tools, change the following 3 options:

a. Change the Picture Size from Medium to Original

b. Change Link to: property from Source Picture to None.

c. Next select Set to default to apply these settings to all incoming images.

clip_image006[4]

You are now ready to copy and paste a workflow from a Word doc. Be aware that not everything will translate beautifully from Word to LiveWriter and from LiveWriter to GMV. Sooo…

1. Create your workflow in Word using the guidelines above

2. Copy and Paste your workflow from Word to LiveWriter – adjust formatting as necessary

3. Publish your workflow from LiveWriter to GMV

4. View post in GMV and adjust formatting as necessary

5. Once you get everything formatted in GMV, download the article as a PDF and make sure all formatting is acceptable – if not, adjust the formatting using the WYSIWYG editor within GMV.

clip_image008[4]

Known Oddities, Weirdnesses, and Additional Suggestions

1. Important: Always use Enter as opposed to Shift+Enter (particularly for placing image captions below an image, creating numbered lists, etc…). This may not look the best in the actual blog post but will improve the appearance of the PDF.

2. If you publish directly out of Word (which has not been covered in this document), a list numbered 1-x will often be restarted following the insertion of an image. If you decide to publish out of Word, you will spend a lot of time directly editing the HTML version of a post to fix this. Check out the Laser Control Workflow for Color Mapping post to see an example of this.

3. Tables are a little iffy. They typically work better when published directly out of Word. For the scan metadata forms I either published them out of Word or converted the tables to images first. If anyone finds a better solution, please add it here.

Posted in Admin Articles

Tiwanaku, Bolivia – Digital Elevation Model (1972)

The Center has been involved in a multi-year project in collaboration with Dr. Alexei Vranich at the University of Pennsylvania to scan and document the Pre-Incan site of Tiwanaku, Bolivia.  Read a short synopsis of the project at Tiwanaku Project Details and for full details on the entire survey, refer to Geophysics and Geomatics at Tiwanaku.

For the .jpg, .tif, or .img photogrammetry formats, we recommend the free viewer ArcGIS Explorer Desktop. This free GIS application provides ways to explore and share GIS data.

Digital elevation model (DEM) made using photogrammetric techniques on 10 vertical aerial photographs of Tiwanaku, Bolivia. Center for Advanced Spatial Technologies (CAST), University of Arkansas; Adam Barnes.

dem_1972_1m.tif (File size – 71 mb)

Photogrammetric processing was performed on 10 historic vertical aerial photographs from 1972 to produce this digital elevation model (DEM) covering the monumental core and surrounding areas of Tiwanaku.
Ground sample distance – 0.5-m
Coverage – 330-ha
Coordinate system – Arbitrary, based on local coordinate system used by archaeologists.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Museum of Archaeology and Anthropology, General Robotics, Automation, Sensing and Perception (GRASP) Lab (University of Pennsylvania) and Center for Advanced Spatial Technologies (University of Arkansas)
Longer version: Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff and University of Pennsylvania.

 

Posted in Bolivia, Scanning, Tiwanaku, Bolivia GIS/RS

Tiwanaku, Bolivia – Digital Elevation Model (1992)

 

The Center has been involved in a multi-year project in collaboration with Dr. Alexei Vranich at the University of Pennsylvania to scan and document the Pre-Incan site of Tiwanaku, Bolivia.  Read a short synopsis of the project at Tiwanaku Project Details and for full details on the entire survey, refer to Geophysics and Geomatics at Tiwanaku.

For the .jpg, .tif, or .img photogrammetry formats, we recommend the free viewer ArcGIS Explorer Desktop. This free GIS application provides ways to explore and share GIS data.

Digital elevation model (DEM) made using photogrammetric techniques on 2 vertical aerial photographs of Tiwanaku, Bolivia. Center for Advanced Spatial Technologies (CAST), University of Arkansas; Adam Barnes.

dem_1992_1m.tif (File size – 26 mb)

Photogrammetric processing was performed on two historic vertical aerial photographs from 1992 to produce this digital elevation model (DEM) covering the monumental core and surrounding areas of Tiwanaku.
Ground sample distance – 1-m
Coverage – 550-ha
Coordinate system – Arbitrary, based on local coordinate system used by archaeologists.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Museum of Archaeology and Anthropology, General Robotics, Automation, Sensing and Perception (GRASP) Lab (University of Pennsylvania) and Center for Advanced Spatial Technologies (University of Arkansas)
Longer version: Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff and University of Pennsylvania.

 

Posted in Bolivia, Scanning, Tiwanaku, Bolivia GIS/RS

Cuzco, Peru – Vector Line Data of Past Terraces

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

Vector line data of possible past terraces

tin_of_past_cuzco.zip (ESRI TIN, 1MB)
Collected 2009
Vector line data of possible past terraces. Terraces are based on maps from the 1978 survey and from in situ verification. The view is toward Saqsaywaman.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

 

Posted in Cuzco, Peru GIS/RS, Peru

Cuzco, Peru – TIN of Past Terraces

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

 

TIN of possible past terraces

cuzco_past.zip (TIF, 2MB)
Collected 2009
Triangulated Irregular Network (TIN) of possible past terraces. Terraces are based on maps from the 1978 survey and from in situ verification. The view is toward Saqsaywaman.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

 

Posted in Cuzco, Peru GIS/RS, Peru

Cuzco, Peru – TIN with Overlaid Walls

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

TIN with Inca walls overlaid

walls.zip (ESRI shapefile, 2MB)
Collected 2009
TIN (Triangulated Irregular Network) with verified walls overlaid in red. The view is from the tail of the puma, toward Saqsaywaman.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

Posted in Cuzco, Peru GIS/RS, Peru

Cuzco, Peru – Georeferenced Map from 1978

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

Georeferenced Map of 1978 Survey

cuzco_1978.zip (TIF, 16MB)
Collected 2009
Georeferenced map from 1978 showing existing walls (slightly thicker black). Red features are walls verified by Geomatics students and mapped with attribute data on wall height.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

 

 

Posted in Cuzco, Peru GIS/RS, Peru

Machu Picchu, Peru – Photogrammetry of Earth Temple (Texture)

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

Earth_Temple.zip (~740 mb)
High resolution DSLR texture photos (Nikon D200 and D70) for visualization applications.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

Posted in Machu Picchu, Peru photogram, Peru

Machu Picchu, Peru – Photogrammetry of Condor Temple (Texture)

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

 

Temple_Condor.zip (~288 mb)
High resolution DSLR texture photos (Nikon D200 and D70) for visualization applications.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

Posted in Machu Picchu, Peru photogram, Peru

Machu Picchu, Peru – Photogrammetry of Condor Temple (Photos)

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

Temple_Condor.zip (~750 mb)
High resolution DSLR photos (Nikon D200 and D70) for photogrammetric applications.

 

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

Posted in Machu Picchu, Peru photogram, Peru

Machu Picchu, Peru – Photogrammetry of Main Temple (Texture)

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.
 

Principal (Main) Temple

Main_Temple.zip (~799 mb)
High resolution DSLR texture photos (Nikon D200 and D70) for visualization applications.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

Posted in Machu Picchu, Peru photogram, Peru

Machu Picchu, Peru – Photogrammetry of Main Temple (Photos)

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

 

Principal (Main) Temple

Main_Temple.zip (625 mb)
High resolution DSLR photos (Nikon D200 and D70) for photogrammetric applications.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

Posted in Machu Picchu, Peru photogram, Peru

Machu Picchu, Peru – Photogrammetry of Intiwatana (Texture)

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

Intiwatana

Intiwatana.zip (~741 mb)
High resolution DSLR texture photos (Nikon D200 and D70) for visualization applications.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

 

Posted in Machu Picchu, Peru photogram, Peru

Machu Picchu, Peru – Photogrammetry of Intiwatana (Photos)

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

Intiwatana

Intiwatana.zip (~3 gb)
High resolution DSLR photos (Nikon D200 and D70) for photogrammetric applications.

 

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

 

Posted in Machu Picchu, Peru photogram, Peru

Machu Picchu, Peru – Photogrammetry of Temple of the Sun (Texture)

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

 

Temple of the Sun

Temple_Sun.zip (~1 gb)
High resolution DSLR texture photos (Nikon D200 and D70) for visualization applications.

Please note. This data is distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data under the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

 

 

Posted in Machu Picchu, Peru photogram, Peru

Machu Picchu, Peru – Photogrammetry of Temple of the Sun (Photos)

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

 

Temple of the Sun

Temple_Sun.zip (~6 gb)
High resolution DSLR photos (Nikon D200 and D70) for photogrammetric applications.

 

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

 

 

Posted in Machu Picchu, Peru photogram, Peru

Tiwanaku, Bolivia – Ortho Image (1992)

 

The Center has been involved in a multi-year project in collaboration with Dr. Alexei Vranich at the University of Pennsylvania to scan and document the Pre-Incan site of Tiwanaku, Bolivia. Read a short synopsis of the project at Tiwanaku Project Details and for full details on the entire survey, refer to Geophysics and Geomatics at Tiwanaku.

For the .jpg, .tif, or .img photogrammetry formats, we recommend the free viewer ArcGIS Explorer Desktop. This free GIS application provides ways to explore and share GIS data.

[Image: Orthophoto from 1992 photography of Tiwanaku, Bolivia, produced by Adam Barnes, Center for Advanced Spatial Technologies (CAST), University of Arkansas]

ortho_1992.tif (File size – 125 mb)

Photogrammetric processing was performed on two historic vertical aerial photographs from 1992 to produce this ortho mosaic covering the monumental core and surrounding areas of Tiwanaku.
Ground sample distance – 20.4 cm
Coverage – 710 ha
Coordinate system – Arbitrary, based on the local coordinate system used by archaeologists.

Photos | Average Scale | Average Flying Height (m) | Ground Coverage per Pixel (cm)
1992 Photos | 1:16,100 | 2470 | 20.4
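As a sanity check, the numbers above are mutually consistent. Assuming a standard 152 mm aerial mapping camera (an assumption; the focal length is not stated in the metadata), the flying height implies roughly the published scale, and the 20.4 cm ground coverage implies a film-scan pixel of about 12.7 µm (~2000 dpi). A minimal sketch:

```python
# Photogrammetry sanity check for the 1992 mosaic. The 152 mm focal
# length is an assumed value for a standard metric aerial camera;
# the other numbers are from the table above.
FOCAL_LENGTH_M = 0.152    # assumed
FLYING_HEIGHT_M = 2470    # published average flying height
SCALE_NUMBER = 16_100     # published scale 1:16,100
GSD_M = 0.204             # published ground coverage per pixel

# Photo scale number = flying height / focal length
implied_scale = FLYING_HEIGHT_M / FOCAL_LENGTH_M
print(round(implied_scale))      # 16250, close to the published 16,100

# Ground sample distance = film-scan pixel size * scale number,
# so the scan pixel size is GSD / scale number.
scan_pixel_um = GSD_M / SCALE_NUMBER * 1e6
print(round(scan_pixel_um, 1))   # 12.7 (µm), i.e. roughly a 2000 dpi film scan
```

The same relationship holds for the 1972 flight below, which suggests both photo sets were digitized on the same scanner.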

 

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:
Credit: Museum of Archeology and Anthropology, General Robotics, Automation, Sensing and Perception (GRASP) Lab (University of Pennsylvania) and Center for Advanced Spatial Technologies (University of Arkansas)
Longer version: Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff and the University of Pennsylvania.

Posted in Bolivia, Scanning, Tiwanaku, Bolivia GIS/RS, Tiwanaku, Bolivia photogram

Tiwanaku, Bolivia – Ortho Image (1972)

 

The Center has been involved in a multi-year project in collaboration with Dr. Alexei Vranich at the University of Pennsylvania to scan and document the Pre-Incan site of Tiwanaku, Bolivia.  Read a short synopsis of the project at Tiwanaku Project Details and for full details on the entire survey, refer to Geophysics and Geomatics at Tiwanaku.

For the .jpg, .tif, or .img photogrammetry formats, we recommend the free viewer ArcGIS Explorer Desktop. This free GIS application provides ways to explore and share GIS data.

 

[Image: Orthophoto from 1972 photography of Tiwanaku, Bolivia, produced by Adam Barnes, Center for Advanced Spatial Technologies (CAST), University of Arkansas]

ortho_1972.tif (File size – 745 mb)

Photogrammetric processing was performed on 10 historic vertical aerial photographs from 1972 to produce this ortho mosaic covering the monumental core and surrounding areas of Tiwanaku.
Ground sample distance – 6.5 cm
Coverage – 360 ha
Coordinate system – Arbitrary, based on the local coordinate system used by archaeologists.

Photos | Average Scale | Average Flying Height (m) | Ground Coverage per Pixel (cm)
1972 Photos | 1:5,150 | 782 | 6.5
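Run in the other direction, the 1972 numbers recover the digitization and camera parameters; the scan pixel size and focal length below are inferred values, not part of the original metadata:

```python
# Infer acquisition parameters from the published 1972 numbers.
SCALE_NUMBER = 5_150      # published scale 1:5,150
FLYING_HEIGHT_M = 782     # published average flying height
GSD_M = 0.065             # published ground coverage per pixel

# Scan pixel size = GSD / scale number
scan_pixel_um = GSD_M / SCALE_NUMBER * 1e6
# Focal length = flying height / scale number
focal_length_mm = FLYING_HEIGHT_M / SCALE_NUMBER * 1000

print(round(scan_pixel_um, 1))   # 12.6 (µm), matching the 1992 scan resolution
print(round(focal_length_mm))    # 152 (mm), a standard aerial mapping lens
```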

 

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:
Credit: Museum of Archeology and Anthropology, General Robotics, Automation, Sensing and Perception (GRASP) Lab (University of Pennsylvania) and Center for Advanced Spatial Technologies (University of Arkansas)
Longer version: Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff and the University of Pennsylvania.

Posted in Bolivia, Scanning, Tiwanaku, Bolivia GIS/RS, Tiwanaku, Bolivia photogram

Cuzco, Peru – Photogrammetry of Saqsaywaman

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

For the .jpg, .tif or .img photogrammetry formats, we recommend the free viewer ArcGIS Explorer Desktop. This free GIS application provides ways to explore and share GIS data.

Saqsaywaman.zip (JPEGs, 52MB)
Collected 2009
High resolution photos of Saqsaywaman in Cuzco, Peru.

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

 

 

Posted in Cuzco, Peru photogram, Peru

Cuzco, Peru – Photogrammetry of Kuricancha

 

Data collected in 2009 as part of the Computer Modeling of Heritage Resources, Peru.

For the .tif, .jpg and .img formats, we recommend the free viewer ArcGIS Explorer Desktop. This free GIS application provides ways to explore and share GIS data.

 

Kuricancha.zip (JPEGs, 22.5MB)
Collected 2009
High resolution photos of Kuricancha in Cuzco, Peru.

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:
Credit: Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas) and Cotsen Institute for Archaeology (UCLA)
Longer version: Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

 

Posted in Cuzco, Peru photogram, Peru

University of Arkansas, Fayetteville – Photogrammetry Mosaic Northwest

 

Data collected on December 10, 2010 by Eagle Forestry Services, Inc. Monticello, Arkansas

For the geotiff (TIF) format, we recommend the free viewer ArcGIS Explorer Desktop. This free GIS application provides ways to explore and share GIS data.

 

 

campusmosaic_2010_nw.TIF (~346 mb)
~6 inch resolution, fully orthorectified imagery acquired with the Trimble Digital Sensor System (DSS); UTM Zone 15N coordinate system, WGS84 datum.

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:
Credit: Center for Advanced Spatial Technologies (University of Arkansas) and Eagle Forestry Services, Inc., Monticello, Arkansas.
Longer version: Data collected on December 10, 2010 by Eagle Forestry Services, Inc., Monticello, Arkansas. Data distributed by the Center for Advanced Spatial Technologies.

Posted in United States, University of Arkansas, Fayetteville photogram

University of Arkansas, Fayetteville – Photogrammetry Mosaic Northeast

 

Data collected on December 10, 2010 by Eagle Forestry Services, Inc. Monticello, Arkansas

For the geotiff (TIF) format, we recommend the free viewer ArcGIS Explorer Desktop. This free GIS application provides ways to explore and share GIS data.

 

 

campusmosaic_2010_ne.TIF (~334 mb)
~6 inch resolution, fully orthorectified imagery acquired with the Trimble Digital Sensor System (DSS); UTM Zone 15N coordinate system, WGS84 datum.

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:
Credit: Center for Advanced Spatial Technologies (University of Arkansas) and Eagle Forestry Services, Inc., Monticello, Arkansas.
Longer version: Data collected on December 10, 2010 by Eagle Forestry Services, Inc., Monticello, Arkansas. Data distributed by the Center for Advanced Spatial Technologies.

Posted in United States, University of Arkansas, Fayetteville photogram


University of Arkansas, Fayetteville – Photogrammetry Mosaic Southeast

 

Data collected on December 10, 2010 by Eagle Forestry Services, Inc. Monticello, Arkansas

For the geotiff (TIF) format, we recommend the free viewer ArcGIS Explorer Desktop. This free GIS application provides ways to explore and share GIS data.

 

 

campusmosaic_2010_se.TIF (~338 mb)
~6 inch resolution, fully orthorectified imagery acquired with the Trimble Digital Sensor System (DSS); UTM Zone 15N coordinate system, WGS84 datum.

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:
Credit: Center for Advanced Spatial Technologies (University of Arkansas) and Eagle Forestry Services, Inc., Monticello, Arkansas.
Longer version: Data collected on December 10, 2010 by Eagle Forestry Services, Inc., Monticello, Arkansas. Data distributed by the Center for Advanced Spatial Technologies.

Posted in United States, University of Arkansas, Fayetteville photogram

University of Arkansas, Fayetteville – Photogrammetry Mosaic Southwest

 

Data collected on December 10, 2010 by Eagle Forestry Services, Inc. Monticello, Arkansas

For the geotiff (TIF) format, we recommend the free viewer ArcGIS Explorer Desktop. This free GIS application provides ways to explore and share GIS data.

 

 

campusmosaic_2010_sw.TIF (~329 mb)
~6 inch resolution, fully orthorectified imagery acquired with the Trimble Digital Sensor System (DSS); UTM Zone 15N coordinate system, WGS84 datum.

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:
Credit: Center for Advanced Spatial Technologies (University of Arkansas) and Eagle Forestry Services, Inc., Monticello, Arkansas.
Longer version: Data collected on December 10, 2010 by Eagle Forestry Services, Inc., Monticello, Arkansas. Data distributed by the Center for Advanced Spatial Technologies.

Posted in United States, University of Arkansas, Fayetteville photogram

University of Arkansas, Fayetteville – Photogrammetry Mosaic

 

Data collected on December 10, 2010 by Eagle Forestry Services, Inc. Monticello, Arkansas

For the geotiff (TIF) format, we recommend ArcGIS Explorer Desktop. This viewer is a free GIS application providing ways to explore and share GIS data.

UAcampusmosaic_2010.tif (~165 gb)
~6 inch resolution, fully orthorectified imagery acquired with the Trimble Digital Sensor System (DSS); UTM Zone 15N coordinate system, WGS84 datum.

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:
Credit: Center for Advanced Spatial Technologies (University of Arkansas) and Eagle Forestry Services, Inc., Monticello, Arkansas.
Longer version: Data collected on December 10, 2010 by Eagle Forestry Services, Inc., Monticello, Arkansas. Data distributed by the Center for Advanced Spatial Technologies.

Posted in United States, University of Arkansas, Fayetteville photogram

Ostia Antica, Italy – Capitoline Temple

 

Data were collected in 2007 with the Optech ILRIS-3D laser scanner and the Konica Minolta VIVID 9i as part of the Ostia Antica 3D Scanning Project.

Several data sets from the Ostia Antica survey are available in these posts as point clouds in the Polyworks PWK format (as IMInspect projects).

For the PWK formats we recommend using the free Polyworks | Viewer (previously the Polyworks IM Viewer), and for the obj files we recommend Rapidform Geomagic Verify Viewer (previously the Rapidform Explorer).

To open a PWZIP file, click on the .pwzip link below and save the file (in Mozilla Firefox, right-click on the link and save the file). Open IMView and left click on “File” at the top of the screen. Select “Open project”, then “Add Workspace” and browse to the .pwzip’s saved location. Next, choose a local location to extract the files. The files will unzip and the project will appear in IMView’s workspace file structure. Left click on the project and open it.

Capitoline_Temple.pwzip (53.8 mb)
Collected 2007

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:

Credit:

University of Arkansas, Soprintendenza per i Beni Archeologici di Ostia, Leica Geosystems, National Science Foundation, Center for Advanced Spatial Technologies

Longer Version:

This project was made possible through funding from the University of Arkansas Honors College and with equipment provided through the Leica Geosystems Chair in Geospatial Imaging funding and National Science Foundation Grant BCS 0321286. Additional assistance was provided by the Soprintendenza per i Beni Archeologici di Ostia, including Angelo Pellegrino, Direttore degli Scavi di Ostia Antica. The project was coordinated by the University of Arkansas: Honors College – Dean Robert McMath; Classical Studies Program, Department of Foreign Languages – Dr. David Fredrick; School of Architecture – Tim DeNoble; Rome Center for Architecture and Humanities – Prof. Davide Vitali, Director, and Prof. Francesco Bedeschi. Further coordination was provided by the Center for Advanced Spatial Technologies (CAST), University of Arkansas – Director W. Fredrick Limp, Prof. Jackson Cothren, and the data acquisition and processing team of Adam Barnes, Christopher Goodmaster, Malcolm Williamson, and Caitlin Stevens. An undergraduate honors colloquium, Visualizing the Roman City, worked more extensively with the data; a paper on the class, Visualizing the Roman City: Viewing the past through multidisciplinary eyes, was presented at the April 2008 Computer Applications in Archaeology Conference in Budapest.

Posted in Italy, Ostia Antica, Italy scanning

Ostia Antica, Italy – Baths of the Seven Sages

 

Data were collected in 2007 with the Optech ILRIS-3D laser scanner and the Konica Minolta VIVID 9i as part of the Ostia Antica 3D Scanning Project.

Several data sets from the Ostia Antica survey are available in these posts as point clouds in the Polyworks PWK format (as IMInspect projects).

For the PWK formats we recommend using the free Polyworks | Viewer (previously the Polyworks IM Viewer), and for the obj files we recommend Rapidform Geomagic Verify Viewer (previously the Rapidform Explorer).

To open a PWZIP file, click on the .pwzip link below and save the file (in Mozilla Firefox, right-click on the link and save the file). Open IMView and left click on “File” at the top of the screen. Select “Open project”, then “Add Workspace” and browse to the .pwzip’s saved location. Next, choose a local location to extract the files. The files will unzip and the project will appear in IMView’s workspace file structure. Left click on the project and open it.

Bath_of_Seven_Sages_Interior.pwzip (450 mb)
Collected 2007
Bath_of_Seven_Sages_Interior.pwzip combines scans of the interior of one section of the House of the Charioteers complex.

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:

Credit:

University of Arkansas, Soprintendenza per i Beni Archeologici di Ostia, Leica Geosystems, National Science Foundation, Center for Advanced Spatial Technologies

Longer Version:

This project was made possible through funding from the University of Arkansas Honors College and with equipment provided through the Leica Geosystems Chair in Geospatial Imaging funding and National Science Foundation Grant BCS 0321286. Additional assistance was provided by the Soprintendenza per i Beni Archeologici di Ostia, including Angelo Pellegrino, Direttore degli Scavi di Ostia Antica. The project was coordinated by the University of Arkansas: Honors College – Dean Robert McMath; Classical Studies Program, Department of Foreign Languages – Dr. David Fredrick; School of Architecture – Tim DeNoble; Rome Center for Architecture and Humanities – Prof. Davide Vitali, Director, and Prof. Francesco Bedeschi. Further coordination was provided by the Center for Advanced Spatial Technologies (CAST), University of Arkansas – Director W. Fredrick Limp, Prof. Jackson Cothren, and the data acquisition and processing team of Adam Barnes, Christopher Goodmaster, Malcolm Williamson, and Caitlin Stevens. An undergraduate honors colloquium, Visualizing the Roman City, worked more extensively with the data; a paper on the class, Visualizing the Roman City: Viewing the past through multidisciplinary eyes, was presented at the April 2008 Computer Applications in Archaeology Conference in Budapest.

Posted in Italy, Ostia Antica, Italy scanning

Ostia Antica, Italy – House of the Charioteers Exterior

 

Data were collected in 2007 with the Optech ILRIS-3D laser scanner and the Konica Minolta VIVID 9i as part of the Ostia Antica 3D Scanning Project.

Several data sets from the Ostia Antica survey are available in these posts as point clouds in the Polyworks PWK format (as IMInspect projects).

For the PWK formats we recommend using the free Polyworks | Viewer (previously the Polyworks IM Viewer), and for the obj files we recommend Rapidform Geomagic Verify Viewer (previously the Rapidform Explorer).

To open a PWZIP file, click on the .pwzip link below and save the file (in Mozilla Firefox, right-click on the link and save the file). Open IMView and left click on “File” at the top of the screen. Select “Open project”, then “Add Workspace” and browse to the .pwzip’s saved location. Next, choose a local location to extract the files. The files will unzip and the project will appear in IMView’s workspace file structure. Left click on the project and open it.

Charioteer_Exterior.pwzip (152 mb)
Collected 2007; aerial perspective view
Charioteer_Exterior.pwzip combines scans taken from the exterior of the House of the Charioteers and Baths of the Seven Sages.  This structure was very dense and complex, resulting in the interior and exterior being separated for processing.

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:

Credit:

University of Arkansas, Soprintendenza per i Beni Archeologici di Ostia, Leica Geosystems, National Science Foundation, Center for Advanced Spatial Technologies

Longer Version:

This project was made possible through funding from the University of Arkansas Honors College and with equipment provided through the Leica Geosystems Chair in Geospatial Imaging funding and National Science Foundation Grant BCS 0321286. Additional assistance was provided by the Soprintendenza per i Beni Archeologici di Ostia, including Angelo Pellegrino, Direttore degli Scavi di Ostia Antica. The project was coordinated by the University of Arkansas: Honors College – Dean Robert McMath; Classical Studies Program, Department of Foreign Languages – Dr. David Fredrick; School of Architecture – Tim DeNoble; Rome Center for Architecture and Humanities – Prof. Davide Vitali, Director, and Prof. Francesco Bedeschi. Further coordination was provided by the Center for Advanced Spatial Technologies (CAST), University of Arkansas – Director W. Fredrick Limp, Prof. Jackson Cothren, and the data acquisition and processing team of Adam Barnes, Christopher Goodmaster, Malcolm Williamson, and Caitlin Stevens. An undergraduate honors colloquium, Visualizing the Roman City, worked more extensively with the data; a paper on the class, Visualizing the Roman City: Viewing the past through multidisciplinary eyes, was presented at the April 2008 Computer Applications in Archaeology Conference in Budapest.

Posted in Italy, Ostia Antica, Italy scanning

Ostia Antica, Italy – Casa Giordino Insula

 

Data were collected in 2007 with the Optech ILRIS-3D laser scanner and the Konica Minolta VIVID 9i as part of the Ostia Antica 3D Scanning Project.

Several data sets from the Ostia Antica survey are available in these posts as point clouds in the Polyworks PWK format (as IMInspect projects).

For the PWK formats we recommend using the free Polyworks | Viewer (previously the Polyworks IM Viewer), and for the obj files we recommend Rapidform Geomagic Verify Viewer (previously the Rapidform Explorer).

To open a PWZIP file, click on the .pwzip link below and save the file (in Mozilla Firefox, right-click on the link and save the file). Open IMView and left click on “File” at the top of the screen. Select “Open project”, then “Add Workspace” and browse to the .pwzip’s saved location. Next, choose a local location to extract the files. The files will unzip and the project will appear in IMView’s workspace file structure. Left click on the project and open it.

Casa_Giordino.pwzip (568 mb)
Collected 2007; top view

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:

Credit:

University of Arkansas, Soprintendenza per i Beni Archeologici di Ostia, Leica Geosystems, National Science Foundation, Center for Advanced Spatial Technologies

Longer Version:

This project was made possible through funding from the University of Arkansas Honors College and with equipment provided through the Leica Geosystems Chair in Geospatial Imaging funding and National Science Foundation Grant BCS 0321286. Additional assistance was provided by the Soprintendenza per i Beni Archeologici di Ostia, including Angelo Pellegrino, Direttore degli Scavi di Ostia Antica. The project was coordinated by the University of Arkansas: Honors College – Dean Robert McMath; Classical Studies Program, Department of Foreign Languages – Dr. David Fredrick; School of Architecture – Tim DeNoble; Rome Center for Architecture and Humanities – Prof. Davide Vitali, Director, and Prof. Francesco Bedeschi. Further coordination was provided by the Center for Advanced Spatial Technologies (CAST), University of Arkansas – Director W. Fredrick Limp, Prof. Jackson Cothren, and the data acquisition and processing team of Adam Barnes, Christopher Goodmaster, Malcolm Williamson, and Caitlin Stevens. An undergraduate honors colloquium, Visualizing the Roman City, worked more extensively with the data; a paper on the class, Visualizing the Roman City: Viewing the past through multidisciplinary eyes, was presented at the April 2008 Computer Applications in Archaeology Conference in Budapest.

Posted in Italy, Ostia Antica, Italy scanning

Virtual Hampson Museum, Arkansas USA – Artifact BR_1503

 

These data were collected as part of the Virtual Hampson Museum Project. The Virtual Hampson Museum contains nearly 450 3D digital artifacts from the collections at Hampson Archeological Museum State Park in Wilson, AR. These artifacts were scanned in full color with the Konica Minolta VIVID 9i laser scanner and processed in the Polyworks and Rapidform software suites. To see more data, be sure to visit the Virtual Hampson Museum.

Data are available for five objects in:
1) an original high resolution mesh (OBJ format)
2) a decimated low resolution mesh (3D PDF)

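Since OBJ is a plain-text format, a downloaded high resolution mesh can be inspected with a few lines of standard-library Python. This is a minimal sketch; the inline sample is a toy single-triangle mesh standing in for a real artifact file, not actual Hampson data:

```python
from collections import Counter
from io import StringIO

def obj_counts(fileobj):
    """Tally OBJ element types: v = vertex, vt = UV, vn = normal, f = face."""
    counts = Counter()
    for line in fileobj:
        parts = line.split(maxsplit=1)
        if parts and not parts[0].startswith('#'):  # skip blanks and comments
            counts[parts[0]] += 1
    return counts

# Toy mesh standing in for a downloaded artifact OBJ:
sample = StringIO("""\
# toy mesh
v 0 0 0
v 1 0 0
v 0 1 0
vt 0 0
vt 1 0
vt 0 1
f 1/1 2/2 3/3
""")
print(obj_counts(sample))  # Counter({'v': 3, 'vt': 3, 'f': 1})
```

Running `obj_counts(open("BR_1503.obj"))` on a real download would report the vertex and face counts of the full-resolution scan.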
Download high resolution OBJ file for BR_1503

Download lower resolution PDF file for BR_1503

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include the attribution provided here. You may not use the data or products for a commercial purpose without additional approval. Please attach the following credit to all data and products developed therefrom:

Credit:

Arkansas Natural and Cultural Resources Council, Hampson Museum Staff, Arkansas Department of Parks and Tourism, Arkansas Archeological Survey, Center for Advanced Spatial Technologies, University of Arkansas

Longer version:

The Virtual Hampson Museum was made possible by two grants from the Arkansas Natural and Cultural Resources Council, awarded in June 2007 and 2008. Assistance was provided by Randy Dennis – ANCRC Program Manager. Additional assistance was provided by the Arkansas Department of Parks and Tourism and the staff of the Hampson Museum, including: Marlon Mowdy – Park Superintendent, Hampson Archaeological Museum State Park; Richard Davies – Executive Director; Greg Butts – State Parks Director. The following staff of the Arkansas Archeological Survey provided invaluable assistance: Robert Mainfort – artifact descriptions, 3D Nodena visualization advice; Tom Green – Director. The Center for Advanced Spatial Technologies Virtual Hampson Museum development team was made up of: Angie Payne – Project Director, 3D artist of Upper Nodena Village; Snow Winters-Sasser – 3D artist of Upper Nodena Village; Keenan Cole – Flash website and database development; Katie Simon – artifact scanning, processing, and Access database developer; Stephanie Sullivan – artifact processing; Scott Smallwood – artifact scanning and processing; Christopher Goodmaster – artifact scanning and processing; Caitlin Stevens – artifact processing; Duncan McKinnon – artifact scanning; Fred Limp – CAST Director and Principal Investigator, author of the Nodena 3D Visualization FAQ section.


Virtual Hampson Museum, Arkansas USA – Artifact Ark_HM_1260A

 

These data were collected as part of the Virtual Hampson Museum Project. The Virtual Hampson Museum contains nearly 450 3D digital artifacts from the collections at Hampson Archeological Museum State Park in Wilson, AR. The artifacts were scanned in full color with the Konica Minolta VIVID 9i laser scanner and were processed in the Polyworks and Rapidform processing suites. To see more data, be sure to visit the Virtual Hampson Museum.

Data are available for this object in:
1)      an original high-resolution mesh (OBJ format)
2)      a decimated low-resolution mesh (3D PDF)
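Since the high-resolution downloads are plain Wavefront OBJ meshes, a quick way to sanity-check a file after download is to count its vertex and face records. A minimal Python sketch; the helper name and the example path are illustrative, not part of the project's tooling:

```python
# Minimal sanity check for a downloaded Wavefront OBJ mesh:
# count the geometric-vertex ("v") and face ("f") records.
def obj_stats(path):
    vertices = faces = 0
    with open(path) as f:
        for line in f:
            if line.startswith("v "):    # geometric vertex (not "vn"/"vt")
                vertices += 1
            elif line.startswith("f "):  # polygonal face
                faces += 1
    return vertices, faces

# Example (hypothetical filename for a downloaded artifact):
# print(obj_stats("Ark_HM_1260A.obj"))
```

A mesh that loads as zero vertices usually indicates a truncated download.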

Download high resolution OBJ file for Ark_HM_1260A

Download lower resolution PDF file for Ark_HM_1260A



Virtual Hampson Museum, Arkansas USA – Artifact Ark_HM_792

 


Download high resolution OBJ file for Ark_HM_792

Download lower resolution PDF file for Ark_HM_792



Virtual Hampson Museum, Arkansas USA – Artifact Ark_HM_730

 


Download high resolution OBJ file for Ark_HM_730

Download lower resolution PDF file for Ark_HM_730



Virtual Hampson Museum, Arkansas USA – Artifact Ark_HM_314

 


Download high resolution OBJ file for Ark_HM_314

Download lower resolution PDF file for Ark_HM_314



Machu Picchu, Peru – Water Mirrors Room

In 2005 and 2009, researchers from CAST used the Optech ILRIS-3D laser scanner to scan the ancient ruins of Machu Picchu in Peru.  To read more about the survey, please visit the section on Machu Picchu Project Details.

Several data sets from the Machu Picchu survey are available here as point clouds in the Polyworks PWK format (as IMInspect projects) or as .obj polygonal meshes.

For the PWK format we recommend the free Polyworks | Viewer (previously the Polyworks IM Viewer), and for the .obj files we recommend the Geomagic Verify Viewer (previously the Rapidform Explorer).

To open a PWZIP file, click the .pwzip link below and save the file (in Mozilla Firefox, right-click the link and save the file). Open IMView and left-click "File" at the top of the screen. Select "Open project", then "Add Workspace", and browse to the .pwzip's saved location. Next, choose a local location to extract the files. The files will unzip and the project will appear in IMView's workspace file structure. Left-click the project to open it.

 

Water Mirrors Room (Conjunto 16, room 1)

Water Mirrors Room (.zip)
Water Mirrors Room reduced (.zip)

(.obj polygonal mesh available in original resolution and 50% reduced resolution for faster download)

 

Please note: these data are distributed under a Creative Commons Attribution-NonCommercial 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include attribution as provided here. You may not use the data or products for commercial purposes without additional approval. Please attach the following credit to all data and products developed therefrom:

Credit:

Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas), and Cotsen Institute for Archaeology (UCLA)

Longer version:

Data developed under the authority of the Instituto Nacional de Cultura: Vladimir Dávila – Arquitecto del P.A.N. Machu Picchu, Director del P.A.N. Machu Picchu, Direccion Regional de Cultura Cusco; and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed, and distributed by Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson, and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich, Director.


Machu Picchu, Peru – Knob Room


 

Knob Room (Conjunto 16, room 4)

Knob Room reduced (.zip)
(.obj polygonal mesh available in 50% reduced resolution for faster download)



Machu Picchu, Peru – Temple of the Condor


 

Temple of the Condor

Temple of the Condor reduced (.zip)
(.obj polygonal mesh available in 50% reduced resolution for faster download)



Machu Picchu, Peru – Main Temple


 

Principal (Main) Temple

Principal (Main) Temple (.zip)
Principal (Main) Temple reduced (.zip)

Collected 2009
(.obj polygonal mesh available in original resolution and 50% reduced resolution for faster download)



Machu Picchu, Peru – Intiwatana


 

Intiwatana

Intiwatana (.zip)
Intiwatana reduced (.zip)

Collected 2009
(.obj polygonal mesh available in original resolution and 50% reduced resolution for faster download)



Machu Picchu, Peru – Temple of the Sun


 

Temple of the Sun

Temple of the Sun (.zip)
Temple of the Sun reduced (.zip)
Collected 2009
(.obj polygonal mesh available in original resolution and 50% reduced resolution for faster download)



Machu Picchu, Peru – Entire Site


 

Top-down image of Machu Picchu 3D Data Set

MP_entiresite.pwzip (228 MB)
Collected 2005
MP_entiresite.pwk represents the complete Machu Picchu data set, consisting of 8 merged scans. The data were acquired over a period of four nights at 3–20 cm resolution, resulting in a collection of nearly 30 million data points. For viewing purposes, the data have been reduced here to 8 million points. This data set really gives the user a feel for what it is like to "be" at Machu Picchu.
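A reduction like the one described above (30 million points down to 8 million) can be approximated by random subsampling. The sketch below is illustrative only, not the method Polyworks actually uses; production tools typically apply spatially uniform or octree-based decimation:

```python
import random

def decimate(points, target, seed=0):
    """Randomly subsample a list of (x, y, z) points down to `target` points.

    Illustrative sketch: pure random sampling preserves overall density but,
    unlike spatially uniform decimation, gives no guarantee of even coverage.
    """
    if len(points) <= target:
        return list(points)
    return random.Random(seed).sample(points, target)
```

For example, `decimate(cloud, 8_000_000)` would thin a 30-million-point cloud to 8 million points for faster viewing.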



Machu Picchu, Peru – Huayna Picchu


 

Image of Huayna Picchu 3D Data and Huayna Picchu Peak

MP_huayna_picchu.pwzip (117 MB)
Collected 2005
Huayna Picchu is the large peak directly north of Machu Picchu (commonly seen in most photos of the site). MP_huayna_picchu.pwzip consists of a single scan of the ruins atop Huayna Picchu. The scan was acquired from a distance of over 500 meters and took approximately two hours to complete. It was acquired at 3 cm resolution and contains 5 million data points.


Posted in Machu Picchu, Peru scanning, Peru

Machu Picchu, Peru – Central area

In 2005 and 2009, researchers from CAST used the Optech ILRIS-3D laser scanner to scan the ancient ruins of Machu Picchu in Peru.  To read more about the survey, please visit the section on Machu Picchu Project Details.

Several data sets from the Machu Picchu survey are available here as point clouds in the Polyworks PWK format (as IMInspect projects) or .obj polygonal mesh.

For the PWK formats we recommend using the free Polyworks | Viewer (previously the Polyworks IM Viewer), and for the obj files we recommend Rapidform Geomagic Verify Viewer (previously the Rapidform Explorer).

To open a PWZIP file, click the .pwzip link below and save the file (in Mozilla Firefox, right-click the link and save it). Open IMView and left-click “File” at the top of the screen. Select “Open project”, then “Add Workspace”, and browse to the .pwzip’s saved location. Next, choose a local location to extract the files. The files will unzip and the project will appear in IMView’s workspace file structure. Left-click the project to open it.

Image of 3D Data Machu Picchu - Closeup

MP_closeup_reduced.pwzip (73 mb)
Collected 2005
MP_closeup_reduced.pwk is our “enticer” data set for Machu Picchu and has been reduced for relatively quick download and viewing. The data have been subset from a single scan and reduced by half (from an original data resolution of 3 centimeters). The scan area covers the central portion of the site, featuring several structures and a small “plaza” area. In the scan, you can see the beautiful details of the Incan masonry that were captured by the laser scanner. This data set contains slightly less than one million data points.

 

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:

Credit:

Instituto Nacional de Cultura, Center for Advanced Spatial Technologies (University of Arkansas), and Cotsen Institute for Archaeology (UCLA)

Longer version:

Data developed under the authority of the Instituto Nacional de Cultura, Vladimir Dávila – Arquitecto del P.A.N Machu Picchu, Director del P.A.N Machu Picchu Direccion Regional de Cultura Cusco and Fernando Astete – National Archaeological Park of Machu Picchu. Data acquired, processed and distributed by the Center for Advanced Spatial Technologies staff (Snow Winters, Malcolm Williamson and Katie Simon) and by students in the 2009 Cotsen Institute for Archaeology  (UCLA) Cuzco/Machu Picchu Field School, Alexei Vranich Director.

Posted in Machu Picchu, Peru scanning, Peru

University of Arkansas, Fayetteville Heating and Chilling Plant – Heating Interior


Merged scans of the Facilities Management Heating and Chilling Plant buildings were collected with the Leica C10 laser scanner.  The scans include multiple floors within the building interiors as well as the building exteriors.  Interior scans were collected with a point spacing of approximately 2 cm at the most dense (at a range of < 2 meters) to approximately 120 cm at the least dense (at a range of 35 meters).  Exterior scans were collected with a point spacing of approximately 30 cm.  The data sets have been separated due to file size and data density.
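For a scanner sweeping at a fixed angular step, point spacing on a perpendicular surface grows roughly linearly with range, which is why the interior spacing varies so much between close and distant surfaces. A minimal sketch of that relationship (the angular step below is an illustrative assumption, not an actual C10 setting):

```python
import math

def point_spacing(range_m: float, angular_step_deg: float) -> float:
    """Approximate point spacing (m) on a surface perpendicular to the beam."""
    return range_m * math.radians(angular_step_deg)

# Illustrative angular step; real scanner settings differ.
step_deg = 0.57
spacings = {r: round(point_spacing(r, step_deg), 3) for r in (2, 10, 35)}
```

With this model, doubling the range doubles the spacing, which matches the general pattern of dense points near the scanner and sparse points far away.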

 

FAMA_Heating_Interior.zip (473 mb) (available at 2.5cm spacing in .xyz ascii file format)
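The .xyz downloads are plain ASCII, typically one point per line with x, y, z first and optional intensity or color columns after. A minimal sketch for counting points and computing a bounding box (the column layout is an assumption; inspect a few lines of the actual file before relying on it):

```python
def read_xyz(lines):
    """Parse ASCII point lines; keep the first three columns as x, y, z."""
    pts = []
    for line in lines:
        parts = line.split()
        if len(parts) >= 3:
            pts.append(tuple(float(v) for v in parts[:3]))
    return pts

def bounding_box(pts):
    """Return (min_corner, max_corner) tuples for a list of 3D points."""
    xs, ys, zs = zip(*pts)
    return (min(xs), min(ys), min(zs)), (max(xs), max(ys), max(zs))

# Hypothetical sample lines with a trailing intensity column.
sample = ["0.0 0.0 0.0 120", "1.5 2.0 0.3 98", "-0.5 1.0 2.2 110"]
points = read_xyz(sample)
lo, hi = bounding_box(points)
```

For the multi-hundred-megabyte files here, stream the file line by line rather than loading it whole.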

Sitemap_Heating_Interior.htm – Explore the data set in Leica TruView, which requires Leica TruView free viewer and Internet Explorer.  For instructions on using the free TruView data viewer and for a complete list of links to the TruView data related to this project, please see: Accessing Heating & Chilling Plant TruViews.

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:

Credits:

Data was collected in collaboration with University of Arkansas Facilities Management, Operations and Maintenance and Campus Planning Divisions.

Posted in United States, University of Arkansas, Fayetteville scanning

University of Arkansas, Fayetteville Heating and Chilling Plant – Chilling Interior


Merged scans of the Facilities Management Heating and Chilling Plant buildings were collected with the Leica C10 laser scanner.  The scans include multiple floors within the building interiors as well as the building exteriors.  Interior scans were collected with a point spacing of approximately 2 cm at the most dense (at a range of < 2 meters) to approximately 120 cm at the least dense (at a range of 35 meters).  Exterior scans were collected with a point spacing of approximately 30 cm.  The data sets have been separated due to file size and data density.

FAMA_Chilling_Interior.zip (631 mb)

FAMA_Chilling_Reduced.zip (166 mb) (available in original resolution and 50% reduced resolution for faster download in .xyz ascii file format)

Sitemap_Chilling_Main_Floor.htm Sitemap_Chilling_Basement.htm Explore the data set in Leica TruView, which requires Leica TruView free viewer and Internet Explorer.  For a complete list of links to the Leica TruView data related to this project, and for instructions on using TruView, please see: Accessing Heating & Chilling Plant TruViews.

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:

Credits:

Data was collected in collaboration with University of Arkansas Facilities Management, Operations and Maintenance and Campus Planning Divisions.

 

Posted in United States, University of Arkansas, Fayetteville scanning

University of Arkansas, Fayetteville Heating and Chilling Plant – Interiors Combined

 

Merged scans of the Facilities Management Heating and Chilling Plant buildings were collected with the Leica C10 laser scanner.  The scans include multiple floors within the building interiors as well as the building exteriors.  Interior scans were collected with a point spacing of approximately 2 cm at the most dense (at a range of < 2 meters) to approximately 120 cm at the least dense (at a range of 35 meters).  Exterior scans were collected with a point spacing of approximately 30 cm.  The data sets have been separated due to file size and data density.

Fama_Interiors_Combined.zip (826 mb) (in .xyz ascii file format)

Explore the data set in Leica TruView, which requires Leica TruView free viewer and Internet Explorer.  For instructions on using the free TruView data viewer and for a complete list of links to the TruView data related to this project, please see: Accessing Heating & Chilling Plant TruViews.

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:

Credits:

Data was collected in collaboration with University of Arkansas Facilities Management, Operations and Maintenance and Campus Planning Divisions.

Posted in United States, University of Arkansas, Fayetteville scanning

University of Arkansas, Fayetteville Heating and Chilling Plant – Exterior

 

Merged scans of the Facilities Management Heating and Chilling Plant buildings were collected with the Leica C10 laser scanner.  The scans include multiple floors within the building interiors as well as the building exteriors.  Interior scans were collected with a point spacing of approximately 2 cm at the most dense (at a range of < 2 meters) to approximately 120 cm at the least dense (at a range of 35 meters).  Exterior scans were collected with a point spacing of approximately 30 cm.  The data sets have been separated due to file size and data density.

Fama_Exterior.zip (956 mb) (in .xyz ascii file format)

Sitemap_Exterior.htm – Explore the data set in Leica TruView, which requires Leica TruView free viewer and Internet Explorer.  For instructions on using the free TruView data viewer and for a complete list of links to the TruView data related to this project, please see: Accessing Heating & Chilling Plant TruViews.

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach the following credit to all data and products developed therefrom:

Credits:

Data was collected in collaboration with University of Arkansas Facilities Management, Operations and Maintenance and Campus Planning Divisions.

 

Posted in United States, University of Arkansas, Fayetteville scanning

University of Arkansas, Fayetteville – Plaza

 

Data collected as part of the 2010 NSF Sponsored 3D Recording and Visualization Training Program at the University of Arkansas for High School and College Students.

Merged scans collected with the Leica C10 laser scanner and the Z+F Imager 5006i laser scanner of the J.B. Hunt Transport Services Inc. Center for Academic Excellence, Sam M. Walton College of Business, Willard J. Walker Hall, Kimpel Hall and Donald W. Reynolds Center For Enterprise Development buildings.

Merged scans of the study area

all_plazaUA.zip (895 mb)
all_plazaUA_reduced.zip (59 mb)
(available in original resolution and 50% reduced resolution for faster download in .xyz ascii file format)

SiteMap.htm
Explore the data set in Leica TruView. (Requires Leica TruView free viewer and Internet Explorer.)

Please note: These data are distributed under a Creative Commons 3.0 License (see http://creativecommons.org/licenses/by-nc/3.0/ for the full license). You are free to share and remix these data on the condition that you include attribution as provided here. You may not use the data or products for a commercial purpose without additional approvals. Please attach credit to all data and products developed therefrom.

Posted in Scanning, United States, University of Arkansas, Fayetteville scanning

AutoCAD 2009 – Scaling and Adjusting the Coordinate System of AutoCAD Objects

This workflow is demonstrated on AutoCAD drawings from the University of Arkansas Mapping and Room Use Study in the Fall of 2010. In this case, the reference drawing is a globally accurate campus map (in engineering units). Detailed floor plans of individual buildings (in architectural units with no global coordinate system) are scaled and placed in the campus map for eventual use in GIS applications.

In updated versions of ACAD, drawings are scaled automatically when they’re inserted into a drawing with a different scale. However, in older versions, and depending on how drawings were created, automatic scaling does not always work. Sometimes objects within a DWG need to be scaled manually to a known dimension, another object, or an image. This example covers manually scaling and aligning ACAD object(s) using another object as a reference. Please see ‘AutoCAD Interface and File Structure Basics’ for basic navigation and file structure information.

I. Aligning & Scaling an ACAD object using another ACAD object as a reference –

A. Open the reference drawing (ie: campus map) as read only > File > Open > Select file name > DO NOT click ‘Open’, use the pull-down arrow to select ‘Open Read-only’ > Locate the object (ie: individual building) that you are scaling

NOTE: Opening the reference DWG read-only allows you to use it freely as a reference without affecting the original drawing. It protects against accidentally overwriting the original source file.

B. Open the drawing with the objects to be adjusted (ie: individual floor plan) > It is recommended to ‘Open Read-only’ on this drawing as well > Adjust the layers so that only the layers you want to use are visible > Confirm all visible lines are closed polylines

clip_image002[6]

Figure 1 – Layers Manager Toolbar

C. Select the object(s) > RC (right-click) > Clipboard > Copy with base point > LC (left-click) on a significant point on the object(s) to specify the base point (See “Copying & Pasting Objects” in AutoCAD Basics for more information) > Using the lower-left corner consistently is recommended for ease in remembering and placing objects

clip_image004[6]

Figure 2 – Objects are selected and copied

clip_image006[6]

Figure 3 – Magenta box highlights the base point as it is defined by the user

D. Paste object(s) into reference drawing (ie: paste individual floor plan into campus plan) > Zoom into location for placement (reference object) > RC > Clipboard > Paste as block > LC at point that you want to place the base point defined in Step C

E. Align copied block object to correspond with reference object

1. Select block object

2. Command line: ALIGN

3. Prompt to specify the first source point – Select the original base point from previous steps

4. Prompt to specify the first destination point – Select the same original base point from previous steps

5. Specify second source point – select a significant point on the reference object

6. Specify second destination point – select the point on the block object which you want to correspond with the source point in Step 5

7. Click ENTER twice
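Conceptually, a two-point ALIGN pins the first point pair in place and rotates the block about it until the second pair lines up. A minimal 2D sketch of that rotation, in plain Python rather than any ACAD API (ACAD’s ALIGN can also scale; this sketch rotates only):

```python
import math

def align_2d(points, base, src2, dst2):
    """Rotate points about `base` so the direction base->src2 matches base->dst2."""
    ang = (math.atan2(dst2[1] - base[1], dst2[0] - base[0])
           - math.atan2(src2[1] - base[1], src2[0] - base[0]))
    c, s = math.cos(ang), math.sin(ang)
    out = []
    for x, y in points:
        dx, dy = x - base[0], y - base[1]
        out.append((base[0] + dx * c - dy * s, base[1] + dx * s + dy * c))
    return out

# Rotate a unit square so its +x edge lines up with the +y axis.
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
aligned = align_2d(square, base=(0, 0), src2=(1, 0), dst2=(0, 1))
```

The base point stays fixed, just as in steps 3–4 above, and only the second pair determines the rotation.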

clip_image008[6]

Figure 4 – The small blue polygon represents the reference object; the magenta and yellow polygons represent the copied block
 
 
 

clip_image010[6]

Figure 5 – The base point (here, the lower left corner on both the reference and the copied objects) is selected twice as both the source and the destination point

 

clip_image012[6]

Figure 6 – A significant point on the block object is selected for the 2nd source point – here, another corner is used
 
 

clip_image014[6]

Figure 7 – The same significant point on the reference object is selected for the 2nd destination point – NOTE: the exact point does not need to be selected on the reference object but it must be accurately selected along the axis or line to which you want to align
 
 

clip_image016[6]

Figure 8 – The copied block (the magenta and yellow polygons) is now aligned to the reference object (the small blue polygon)
 
 
 

clip_image018[6]

Figure 9 – The same length of wall (highlighted with the magenta boxes) will be used to scale the block object (yellow) to the reference object (blue) – NOTE: To scale to a reference object, the objects must be aligned as outlined in the previous step; the objects have been separated here for explanation only.
 
 
 

F. Scale the block object to the scale of the reference object – we will use a significant feature on the reference object (in this case the length of a wall) to scale the block object. A significant feature has an easily identifiable start point and end point, which we will use to re-define the length of the block object. NOTE: ACAD uses the term ‘reference length’ in the command prompt- this is only applicable when the command is active and should not be confused with the reference object/drawing we have been using:

1. Select block object

2. Command line: SC or SCALE

3. Specify base point > LC to select original base point from previous steps (see Figure 10 – Selection 1)

4. Specify scale factor > command line: R or REFERENCE

5. Specify reference length > This is the significant length (wall) on the object being scaled and must be defined by selecting the start point and the end point

a) Select the start point – here it is the base point from previous steps (see Figure 10 – Selection 2 is the same point as Selection 1)

b) Specify the 2nd point/endpoint of the significant length (see Figure 10 – Selection 3)

6. Specify new length > This is the end point on the reference length, which is defined by the new endpoint only (see Figure 10 – Selection 4)

7. The block object (individual building) and reference object (campus map building) should now have the same scale and orientation
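The reference-length scaling in step F reduces to a single ratio: the new length divided by the measured reference length, applied uniformly about the base point. A minimal sketch of that arithmetic (plain geometry, not an ACAD command):

```python
import math

def reference_scale(ref_start, ref_end, new_end):
    """SCALE with the Reference option: factor = new length / reference length."""
    ref_len = math.dist(ref_start, ref_end)
    new_len = math.dist(ref_start, new_end)
    return new_len / ref_len

def scale_about(points, base, factor):
    """Scale 2D points about a base point by a uniform factor."""
    return [(base[0] + (x - base[0]) * factor,
             base[1] + (y - base[1]) * factor) for x, y in points]

# A 10-unit wall that should measure 25 units in the reference drawing.
factor = reference_scale((0, 0), (10, 0), (25, 0))
wall = scale_about([(0, 0), (10, 0)], base=(0, 0), factor=factor)
```

Selections 1–4 in Figure 10 supply exactly these inputs: the base point, the two ends of the reference length, and the desired new endpoint.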

TIPS: If the object is not scaling correctly, confirm that automatic settings within ACAD are not active and interfering. See the toolbar at the bottom of the screen (where the OSNAPS icon is located) and confirm that the following settings are OFF:

Infer Constraints

Polar Tracking

Object Snap

Tracking

Dynamic Input

Clicking on the icon toggles it on/off, which is noted in the command line – click on each icon listed and confirm it is off

clip_image020[6]

Figure 10 – Scaling the block object using a reference length – Selection 1 (base point) and Selection 2 (the start point of reference length) are the same point; Selection 3 is the endpoint of reference length (ie: the length being scaled); Selection 4 is the desired new endpoint of the length being scaled
 
 

II. Adjusting the Coordinate System and Saving as an Individual Drawing

In the previous steps, the block object (individual building) was aligned and scaled to match the reference drawing (campus map). When the block was placed in the campus map, it was placed into the map’s coordinate system. By using the drawing origin (point 0,0,0, which is coincident in all drawings) we can now create a new drawing with the same coordinates as the campus map so that scaled, aligned, and properly coordinated buildings can be saved individually.

A. Create a new drawing > Adjust the units to match the reference drawing (here campus map drawing is in Architectural Units > 1/16″ Precision) > See ACAD Basics for more on Units

B. In the Reference Drawing (campus map) > RC > Clipboard > Copy the block object (individual floor plan) with base point > Base Point 0,0,0

C. In the new drawing created in Step A > RC > Clipboard > Paste to Original Coordinates > Command Line: ZE (Zoom Extents)

D. The floor plan is now scaled, aligned, and in the proper coordinate system with the Campus Map > Select block object > Command Line: EXPLODE > The floor plan is no longer grouped but is again composed of individual polylines

E. Command Line: SAVEAS > Under files of type, use the pull-down to find AutoCAD 2007 .dwg > Save and close

Posted in Uncategorized, Workflow

On-the-fly and Permanent Orthorectification of NITF Images in ArcGIS

About NITF files

The NITF (National Imagery Transmission Format Standard) format is commonly used for georeferenced imagery and has the ability to store a large amount of ancillary data and metadata along with the image itself, all in one file. One of the more useful pieces of ancillary data optionally stored within an NITF file is known as RPCs (Rational Polynomial Coefficients). With the use of RPCs and elevation data for the area, complete orthorectification is possible. For example data, see CAST’s “Corona Atlas of the Middle East” and choose from the large number of NITF files available for download.
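An RPC maps ground coordinates to image coordinates as a ratio of polynomials in normalized latitude, longitude, and height. The full RPC00B model uses 20 cubic terms per polynomial in a spec-defined order; the sketch below uses a truncated, illustrative 4-term polynomial and entirely hypothetical coefficients, just to show the shape of the computation:

```python
def eval_poly(coeffs, L, P, H):
    """Truncated RPC-style polynomial: c0 + c1*L + c2*P + c3*H.
    (The real RPC00B model uses 20 cubic terms in a spec-defined order.)"""
    return coeffs[0] + coeffs[1] * L + coeffs[2] * P + coeffs[3] * H

def rpc_sample(lat, lon, h, off_scale, num, den):
    """Map ground coords to one image coordinate via a ratio of polynomials.
    off_scale = (lat_off, lat_s, lon_off, lon_s, h_off, h_s, out_off, out_s)."""
    lat_off, lat_s, lon_off, lon_s, h_off, h_s, out_off, out_s = off_scale
    P = (lat - lat_off) / lat_s   # normalized latitude
    L = (lon - lon_off) / lon_s   # normalized longitude
    H = (h - h_off) / h_s         # normalized height
    return out_off + out_s * eval_poly(num, L, P, H) / eval_poly(den, L, P, H)

# Hypothetical coefficients: image row depends linearly on latitude only.
row = rpc_sample(lat=36.1, lon=37.2, h=400.0,
                 off_scale=(36.0, 1.0, 37.0, 1.0, 0.0, 1000.0, 500.0, 1000.0),
                 num=(0.0, 0.0, 1.0, 0.0), den=(1.0, 0.0, 0.0, 0.0))
```

The height term H is why a DEM is needed: without elevation, the ratio cannot be evaluated at the terrain surface, which is exactly what the ArcGIS steps below supply.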

Viewing and creating NITF orthoimage in ArcGIS

Written for ArcGIS 10

1. Open ArcMap and add one or more NITF image files.

2. Add a DEM covering the area.

3. Right-click the image name in the Table of Contents and choose Properties.

image

4. Under the Display tab in the Layer Properties dialog, make the following three changes in the “Orthorectification” box and click OK:

1. Check “Orthorectification using elevation”.

2. Click radio button for “DEM” and use the drop-down menu to choose the appropriate DEM.

3. Make sure “Geoid” is unchecked.

These Layer Properties will need to be set for each NITF file you add to ArcMap.

image

5. Each NITF image with the settings from step 4 above should now appear orthorectified. To make this permanent, the images must be exported to a new file and format (ArcMap will not export to NITF). To do this, there are two options:

1. Use the “Export Data” dialog found by right-clicking the NITF file’s name in the Table of Contents, holding the mouse over “Data” in the context menu, and clicking “Export Data”. Options for extent, spatial reference, cell size, file location, name and format are available in this dialog. Choose the desired settings and export.

2. Use the “Create Ortho Corrected Raster Dataset” tool in ArcToolbox. See the ArcGIS help for details on this tool.

As another option, see the free “NITF for ArcGIS” extension for Desktop and Server available for download here: http://www.esri.com/industries/defense/nitf

Posted in Workflow

Guidelines for preparing workflows/documents for GMV – Private

Options for posting to GMV

1. Creating and publishing a blogpost directly out of Word

2. Publishing from Windows Live Writer (recommended)

Guidelines for creating your workflows in Word

1. Make all images “In line with text”

2. Make all image/table captions text. Do not use text boxes or MS Word captions – these are translated as images when published to GMV.

3. Group side-by-side images (as shown below).

clip_image002

Figure 1: Sample group image and also sample figure text


Sometimes this can be difficult to do in Word. So select an image and copy it – open Photoshop – Create New – increase your canvas width roughly 2× – paste the first image, then copy and paste the second image out of Word. Select both images in PS and group them (Ctrl + G). Select the image group and copy and paste it back into Word.

4. Some text formatting translates across to LiveWriter but not all. For example using preset formatting such as Heading 1 – Heading 3 will retain the size of the font but not the blue color. Also, the reduced font size above for Figure 1 is not retained. So you can format your text a bit in Word but be aware that you may have to reformat in LiveWriter. See screen shot below of copy and paste from THIS document into LiveWriter.

clip_image004

Setting up LiveWriter
LiveWriter is part of Windows Live Essentials. It’s free to download and install on Windows machines. When installing Live Essentials, the default is to install not only LiveWriter but also a slew of other software; these can be unchecked if you choose.

1. Once you have LiveWriter installed, you’ll need to set it up to publish to the GMV. Select – Add Blog Account from the main toolbar. For blog service – choose WordPress. Next enter https://gmv.cast.uark.edu/ for the blog web address and then enter your username and password for GMV (contact Snow if you need a username/password). Hit finish and you should now be connected.

2. By default, LiveWriter resizes images and makes them clickable to see the larger image version. I personally like my images to remain the same size as they are in the original word document. To format images in LiveWriter, first copy and paste an image from a Word doc into LiveWriter and then double click the image to access the Picture Tools. In Picture Tools, change the following 3 options:

a. Change the Picture Size from Medium to Original

b. Change Link to: property from Source Picture to None.

c. Next select Set to default to apply these settings to all incoming images.

clip_image006

You are now ready to copy and paste a workflow from a Word doc. Be aware that not everything will translate beautifully from Word to LiveWriter and from LiveWriter to GMV. Sooo…

1. Create your workflow in Word using the guidelines suggested above

2. Copy and Paste your workflow from Word to LiveWriter – adjust formatting as necessary

3. Publish your workflow from LiveWriter to GMV

4. View post in GMV and adjust formatting as necessary

5. Once you get everything formatted in GMV, download the article as a PDF and see how it looks

clip_image008

Known Oddities, Weirdnesses, and Additional Suggestions

1. Important: When in doubt Use Enter as opposed to Shift+Enter (particularly for placing image captions below an image). This may not look the best in the actual blog post but will improve the appearance of the PDF.

2. If you publish directly out of Word (which has not been covered in this document), a list numbered 1-x will often be restarted following the insertion of an image. If you decide to publish out of Word, you will spend a lot of time directly editing the HTML version of a post to fix this. Check out the Laser Control Workflow for Color Mapping post to see an example of this.

3. Tables are a little iffy. They typically work better when published directly out of Word. For the scan metadata forms I either published them out of word or converted the tables to images first. If anyone finds a better solution, please add it here.

Posted in Admin Articles

AutoCAD 2009 – Interface and File Structure Basics

This workflow is intended for users with minimal AutoCAD experience who want to learn the basic navigation and file logic.  As with any modeling program, there are many ways to perform each function and every user has different methods/shortcuts to reach the same goals.  If you understand the concept and the priorities of the step, use the tools to your best advantage and find your own methods for completing tasks.

See AutoCAD’s help menu and the many free tutorials on-line to help with questions and steps. ACAD’s user interface is fairly friendly. Often just typing the command, all as one word without spaces, will either bring up the command or the appropriate help topic. Each new version of ACAD has changes in the interface but you can always access primary commands in this command line.  Those new to the 2009 version of CAD may find the New Features Tutorial helpful.

Creating/Opening Files

To Open an Existing File -> In ACAD -> Start Menu -> Open (see figure 1)

  • When opening files from different versions of ACAD, or files containing different elements (such as user-specific fonts or styles), a warning will appear when you open the file.  If a dialogue box asks for a shape file (.shp) or states it cannot locate the shape file, close the box by clicking ‘X’; the file will open, replacing the unknown file with a default replacement file.
  • If a message appears regarding updating AEC elements, be aware that older versions of AutoCAD cannot read files saved in newer versions; in order to open/manipulate a new file in an old version of CAD –> Open the file in the NEW version of ACAD –> File –> Save As –> File Type –> pull down to select the appropriate version of CAD.  See the information on file types .dwg versus .dxf in the overview ‘File Formats’.

To Create a New .DWG (ACAD drawing) -> Start Menu -> New -> Drawing -> Template Library opens -> to work with point clouds and other 3D work spaces choose “acad3D.dwt” -> template opens a 3D model space as a .DWG with a default file name -> adjust preferences as needed (see below) -> save file (NOTE: The .dwt name varies with the version of ACAD – the goal is choosing the 3D template)

clip_image002

Figure 1 ACAD’s start menu provides access to basic commands and types of modeling workspaces – user can move between Classic, 3D, and 2D work spaces or customize a new one

File Formats

DWG – this is ACAD’s standard file format.  The type of .DWG format varies with each different release/version of ACAD in which the file was saved (by year).

DXF – or “Drawing eXchange Format” – is a format that is standardized across different CAD and graphics programs. This allows users to exchange drawings even if they don’t have the same program. When you use the DXF format, some objects may change their appearance when re-opened. See the DXFOUT and DXFIN commands for exporting and importing DXF files, respectively.

NOTE: 2004.dxf is a highly supported/interchangeable format and will often allow exchange with software that may not successfully accept the .DWG. To save an ACAD file as a .DXF -> File -> Save As -> pull down file type to .dxf
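Part of why DXF travels so well is that it is plain ASCII: alternating group codes and values. A minimal sketch that emits a single LINE entity in an R12-style ENTITIES section (illustrative only – use DXFOUT for real exports, since a full DXF carries header and table sections this sketch omits):

```python
def minimal_dxf_line(x1, y1, x2, y2, layer="0"):
    """Emit a bare-bones DXF: one LINE entity in an ENTITIES section."""
    pairs = [
        (0, "SECTION"), (2, "ENTITIES"),
        (0, "LINE"), (8, layer),   # group code 8 = layer name
        (10, x1), (20, y1),        # start point X, Y
        (11, x2), (21, y2),        # end point X, Y
        (0, "ENDSEC"), (0, "EOF"),
    ]
    return "\n".join(f"{code}\n{value}" for code, value in pairs)

dxf_text = minimal_dxf_line(0.0, 0.0, 5.0, 2.5)
```

Reading the output makes the group-code/value structure obvious, which is also handy when a DXF import fails and you need to inspect the file by eye.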

BAK – this is ACAD’s backup file format; ACAD automatically creates a duplicate backup file. If your original file becomes corrupt or unusable, rename the BAK file to a DWG file and open as usual.

SV$ – this is the format AutoCAD uses whenever it performs an automatic save. AutoCAD will save the file automatically within a pre-determined time frame. Set the time frame and the location of automatic saves in the Options > Files dialog box.

Navigation

There are several methods to navigate in ACAD

Command Line – simple commands such as PAN, ORBIT, ZOOM can be entered in the command line.

ZOOM – Use the zoom icon or the middle roller on your mouse to zoom in/out; also type ‘ZOOM’ in command line for zooming options. Type shortcut ‘ZE’ to zoom to the extents of all visible objects/points

Toolbars – Right click on the area where any visible toolbar is docked; this brings up your toolbar options.  ACAD contains the most basic toolbars. Activating the toolbars for ‘Orbit’,  ‘3D Navigation’, and ‘Views’ is recommended.

NOTE: When orbiting, a sphere is displayed with nodes shown on the top, bottom, left, and right – the cursor must be within this sphere (even if the objects/points are outside of it) in order to rotate/orbit; placing your cursor over the nodes helps control orbit; the objects and points are not visible during orbit but re-appear when the command ends

Mouse – by default the mouse is set to zoom in/out; RC also displays navigation options

Preferences/ Setup

UNITS – one of the most important preferences to consider is units -> Command line -> UNITS -> Insertion Scale -> Set unit (maintaining the same unit as the scan data, usually meters, is recommended)

NOTE: If point cloud data and CAD need to be in different units or coordinate systems, use ACAD’s point cloud dialogue box to define units and coordinates as they will be used in ACAD -> Command Line -> POINTCLOUD -> Locate data -> Define units/coordinates -> data is converted to the scale/coordinates of the ACAD drawing
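When scan and drawing units differ, the conversion behind the scenes is a single multiplicative factor applied to every coordinate; a minimal sketch for the common meters-to-feet case (the factor is the standard international foot, 1 ft = 0.3048 m):

```python
M_TO_FT = 1.0 / 0.3048  # international foot: 1 ft = 0.3048 m exactly

def convert_points(points, factor):
    """Uniformly rescale coordinates, e.g. scan meters -> drawing feet."""
    return [tuple(v * factor for v in p) for p in points]

meters = [(1.0, 2.0, 0.5), (0.3048, 0.0, 0.0)]
feet = convert_points(meters, M_TO_FT)
```

This is the arithmetic equivalent of what the POINTCLOUD unit setting does for you; mixing it up (or applying it twice) is the usual cause of models that are off by a factor of ~3.28.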

Other Preferences -> Command line –> ‘PREFERENCES’

Files –> displays and allows you to edit paths for file saves (including auto saves), backups, etc

Display –> Screen & element colors, crosshair size, & display performance

Open & Save –> Automatic Save setting options (set to 10 minutes between automatic backup/autosave by default; this can be an invaluable backup or it can be a frustrating time consumer – adjust this setting as needed based on file size and backup issues)

Explore other preferences and file options

Command Lines & Help Menu – Tips

Entering basic commands in their entirety or abbreviated into the command line is often successful and quick — ZOOM (Z), PAN (P), ORBIT, LINE (L), UNDO (U), SAVEAS; You can scroll through the command history with the up arrow in the command line or by entering ‘F2’

Snap Icons – Directly below the command line, Snap icons allow quick toggling of drawing constraints used during modeling

clip_image004

Figure 2 Active command line is highlighted in yellow; the user can scroll through the command history located directly above the active line; Snap icons are located directly below the command prompt; Also note that the Model and Layout tabs at the top allow user to move between model space and paper space

 

Help – ACAD has strong help/search capabilities, so when in doubt the help menu is a good place to start; access it via the command prompt (HELP), File –> Help, the F1 key, or by typing a keyword or phrase into the search box in the upper right-hand corner of the screen

Model Space, Paper Space, & Viewports

These are the names for the virtual spaces within ACAD.  They are controlled through the layout tabs

clip_image006

Figure 3 Layers Toolbar – LC on icon to open layer manager

 

Model Space = infinite 3D modeling space at 1:1 scale

It is extremely important that from the very first step, you utilize the coordinate system in CAD. CAD is made up of infinite space, and in order to keep your project organized, understanding this infinite space is essential. By default, every ModelSpace has the same origin point located at 0,0,0 (meaning the x, y, z coordinates are all 0 – enter this value into the command line with no spaces). Recognizing and using the origin maintains a point of reference among various drawings. See the ‘Copying & Pasting Objects’ section below.

Layout/Paper Space = Virtual sheet of paper containing windows (or “viewports”) into model space.  Layout/Paper space can be scaled, labeled, and manipulated without affecting the objects in model space.  Layout is the preferred space from which to print/plot.

  • Each layout is an individual “sheet of paper” containing an infinite number of viewports, labels, and text
  • Elements & layers can be viewed independently between model & paper space by adjusting their visibility in the Layers Manager

Viewports = a 2d view used to project the 3d model space into the layout view.  It is similar to a ‘window’ that you place in paper space to look into Model Space.

Osnaps / Constraints

Osnaps and constraints are essential to utilizing the full accuracy and efficiency of ACAD.  Icons to toggle them on/off are located along the bottom of the screen, below the command line (see Figure 2)

Osnaps – when ON, they allow the user to “snap” drawing, modeling, and measurement tools to specific points on objects and point clouds (these points such as midpoint or end point appear when the cursor hovers above the location on the object).  When OFF, these grips are not highlighted while drawing.

Infer Constraints – infers where objects, lines, or references would continue into space, allowing the user to snap to intersections/points that might not literally exist

Ortho Mode – restricts the user to orthogonal modeling/drawing

Polar Tracking – User can set a degree as a guide for modeling/drawing

Layers

Layers are the primary means of controlling objects in ACAD; organized, clearly-named layers are essential to managing the drawing.

Note if importing data from Leica Cyclone: When a point cloud is loaded into CloudWorx, Cyclone’s layers are transferred into ACAD layers (Cyclone names transfer preceded by “~”).  Unless you have changed layer names in Cyclone, the point cloud is on “~Default” upon import.

Create new layer(s) for all modeling work through the Layer Property Manager. To create/manage layers -> RC on any ACAD toolbar -> LC Layers -> On the Layers toolbar, LC the Layer Property Manager icon (see Figure 4)

 

clip_image008

Figure 4 Layer Property Manager – Create and modify layers here. Layers are the primary means of controlling objects in ACAD and organized, clearly-named layers are essential to managing the drawing.

Selecting Objects

Lines, blocks, solids, and other objects can be selected individually or in groups/sets.

Select Single Object > LC directly on the object.  Its grips will highlight, showing it has been selected > Use Shift + LC to select multiple grips

One or more objects > LC + move with the mouse to draw a selection window

  • If the selection window is drawn from left-to-right, only objects that are COMPLETELY ENCLOSED BY THE WINDOW are selected
  • If the window is drawn from right-to-left, every object that TOUCHES THE WINDOW is selected
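The two window behaviors above boil down to a containment test versus an intersection test. A minimal sketch, assuming objects are reduced to axis-aligned bounding boxes (the names are illustrative, not ACAD's API):

```python
# A sketch of the two selection-window behaviors: a left-to-right "window"
# selection keeps only fully enclosed objects, while a right-to-left
# "crossing" selection keeps any object the window touches.
def select(window, objects, crossing):
    """window and each object are (xmin, ymin, xmax, ymax) rectangles."""
    wx1, wy1, wx2, wy2 = window
    hits = []
    for name, (x1, y1, x2, y2) in objects.items():
        enclosed = wx1 <= x1 and wy1 <= y1 and x2 <= wx2 and y2 <= wy2
        touches = x1 <= wx2 and wx1 <= x2 and y1 <= wy2 and wy1 <= y2
        if (crossing and touches) or (not crossing and enclosed):
            hits.append(name)
    return hits

objects = {"line": (1, 1, 3, 3), "rect": (4, 0, 8, 2)}
print(select((0, 0, 5, 4), objects, crossing=False))  # -> ['line']
print(select((0, 0, 5, 4), objects, crossing=True))   # -> ['line', 'rect']
```

In the example, the rectangle overlaps the window but extends past its right edge, so only the crossing (right-to-left) selection picks it up, matching the behavior shown in Figures 5 and 6.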

De-select > LC + Re-selecting an object will de-select it from a set; ESCAPE will de-select the entire selection

clip_image010

Figure 5 – (Left) Selection Window drawn left-to-right with start and end points marked. (Right) The grips (blue “dots”) of all objects selected are highlighted – Note the yellow rectangle in the lower right hand of the screen; although the selection window touched the rectangle, it did not enclose it, therefore it is not selected

 

clip_image012

Figure 6 – (Left) Selection Window drawn right-to-left with start and end points marked. (Right) The grips (blue “dots”) of all objects selected are highlighted – Again, note the yellow rectangle in the lower right hand of the screen; the selection window touched the box, therefore it is selected

 

Copying & Pasting Objects

Objects may be copied and pasted within a single drawing or copied from one drawing and pasted into another. In all cases, when objects are copied a base point is assigned. This base point is the “handle” of the copied object – you might think of it as the point at which the mouse is “holding onto” the object.

I. Copying Object(s) – clipboard

A. Select the object(s) to be copied

B. RC > Clipboard > Copy Options:

Copy = Objects are copied to the clipboard and a default base point is assigned; this base point’s location depends on how the object was created but it is often located in the lower left-hand corner of the object(s)

Copy with base point = Objects are copied to the clipboard and the user assigns the base point, which is useful when placing the object accurately

Assigning a base point – the base point may be assigned to a point on the object or to a point in the coordinate system. Using a point on the object is useful when you are trying to place objects and the coordinate system is not important. Using the coordinate system origin (0,0,0) is useful when copying object(s) from one coordinate system to another or when objects do not provide accurate references.

Point on Object > LC on a point on the object to specify it as the base point

Point in Coordinate System > When prompted to specify a base point, manually enter coordinates in the command line (#,#,# – digits separated by commas and NO spaces) > ENTER

NOTE: Copying by the origin point 0,0,0 (x=0, y=0, z=0) allows the user to paste by the same origin point; in this way, object(s) copied from one DWG can be pasted into the exact same location in a different DWG regardless of differing scales or coordinate systems
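The base-point mechanics above reduce to one line of vector arithmetic: each pasted vertex lands at the insertion point plus its offset from the base point. A minimal sketch, with hypothetical coordinates for illustration:

```python
# A sketch of the base-point arithmetic behind copy/paste: each pasted
# vertex lands at insertion_point + (vertex - base_point). Copying with the
# origin (0,0,0) as base point and pasting at 0,0,0 therefore reproduces
# the object in exactly its original position.
def paste(vertices, base_point, insertion_point):
    return [tuple(v[i] - base_point[i] + insertion_point[i] for i in range(3))
            for v in vertices]

wall = [(12.5, 4.0, 0.0), (18.5, 4.0, 0.0)]
# Base point and insertion point both at the origin: nothing moves.
print(paste(wall, (0, 0, 0), (0, 0, 0)))
# Base point on the object itself: the object is re-placed relative to
# wherever you click as the insertion point.
print(paste(wall, (12.5, 4.0, 0.0), (0, 0, 0)))
```

This is why copying by 0,0,0 is the reliable way to move geometry between drawings that share a coordinate system: the offset term cancels and nothing shifts.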

II. Pasting Object(s) – clipboard

A. Paste by base point on the object(s) > RC > Clipboard > Paste > LC to place the object – the point that you LC represents the location that the base point of the copied object is placed

B. Paste by coordinates > RC > Clipboard > Paste options:

Paste to original coordinates = object is placed at the x, y, z coordinates of the original drawing that were defined as the base point by the user when the object was copied

Paste = Command line prompts for insertion point:

Enter coordinates in the command line (#,#,# – digits separated by commas and NO spaces) > ENTER

LC at point in new drawing > object(s) are placed according to the point that you specified as the base point in the original drawing with this LC

C. Paste as Block > Object(s) are converted into a single block/entity upon pasting; this is useful when copying complex drawings to keep everything together

NOTE: Blocks are groups or components that exist within a single drawing or multiple drawings. When copied or inserted, they retain the characteristics of the original block and when pasted, the block is placed on whatever layer is current; when one block is edited, all blocks are edited simultaneously.

NOTE: <EXPLODE> separates blocks into the individual, original objects (and their individual layers); when copying/placing multiple objects at the same time, pasting them as blocks helps to keep multiple objects grouped and positioned correctly until they are adjusted/placed, at which point they can be exploded

III. Copying/pasting within a single drawing – command line

A. Select the object(s) to be copied > command line: <CO> or <COPY>

B. The object is copied and remains visible, and the command line prompts the user to specify the base point > LC on a point on the original object to specify the base point

C. Dragging the mouse moves the copy to the point where it is to be placed > LC on the point where you want to paste the object’s base point

Posted in Checklist, Leica CloudWorx, Shortcut Guide, Workflow

Leica Cyclone 7.1.1 – Creating Basic Geometries Tracing Topography (2D)

This workflow will show you how to create basic geometries and trace topography in Leica’s Cyclone software.
Hint: You can click on any image to see a larger version.

[wptabs style=”wpui-alma” mode=”vertical”]

[wptabtitle] DEFINITIONS[/wptabtitle]

[wptabcontent]Definitions: Vector Objects are geometrical primitives, such as lines, curves, and polygons, that can be used to represent a surface or feature. These geometries can be 3D or 2D, depending on how they are created. These objects can be exported and used as two-dimensional drawings (for site plans or printed documentation) or as three-dimensional objects (in CAD or GIS software). Creating lines two-dimensionally through tracing is highly dependent on the user’s interpretation and basically comes down to visually interpreting the point cloud.

Definitions: Break lines are lines or polylines that sub-divide topography into reasonably sized sections; they represent the edges of a paved surface, a ridge, a channel, or any other topographic feature that the user wants to preserve. In vector drawings, break lines consist of lines, curves, and splines. In meshes, break lines become the edges that the triangles in the mesh conform to, defining and controlling the smoothness and continuity of the mesh. Whether creating a vector drawing or a mesh, tracing the break lines and other lines making up the site is one of the first steps.[/wptabcontent]

Workflow for Tracing Topography and Site Elements – 2D: Please see ‘Leica Cyclone: Interface Basics’ and the ‘Beginner’ and ‘Advanced Workflows for Building Modeling’ for an introduction and an overview of topics not covered here.

[wptabtitle] OPEN A REGISTERED UNIFIED MODEL SPACE[/wptabtitle]

[wptabcontent]I. Open a Registered Unified Model Space -> Create fence around ground plane –> Right Click –> Copy Fenced to new Model Space (NOTE: viewing from a standard side, front, or back view in orthographic mode assists in selection) -> Original MS may be closed

clip_image002

Figure 1 – (Left) Original registered scan world of plaza (Right) Point Cloud Sub-Selection (Select -> Right Click -> Point Cloud Sub-Selection) allows unneeded points, such as trees and vertical surfaces, to be deleted; Sub-selection allows the user to precisely choose and view points before deciding to delete

 [/wptabcontent]

[wptabtitle] SELECT AND DELETE UNNEEDED POINTS[/wptabtitle]

[wptabcontent]II. In the new Working MS -> Select and delete unneeded points: it is best to eliminate as much vertical surface data as possible so that the ground plane to be modeled is isolated. While tracing, deleting unneeded data primarily serves to clarify the details being traced; it becomes more important when meshing.

clip_image004
Figure 2 – The same plaza as Figure 1 now copied to a working MS and “cleaned” of unneeded vertical surfaces and vegetation leaving only the ground plane to be modeled

 

III. Identify break lines and level of detail -> Create a new layer for the break lines (Access Layer Manager by Shift + L) and make this layer current (Highlight layer & click ‘Set Current’) -> Review the area to be modeled and identify areas where the surface changes and/or where you want a clean break or difference between adjacent surfaces. We will create lines in these areas. Create layers for primary and secondary features as needed depending on the complexity of site.[/wptabcontent]

[wptabtitle] SHOW THE ACTIVE REFERENCE PLANE[/wptabtitle] [wptabcontent]IV. Show the Active Reference Plane. In Cyclone’s 2D Drawing Mode, the Active Reference Plane becomes the location along which all drawn objects are placed. For example, in Top View, which we will be using, all objects have their z-coordinates on this plane. It is comparable to the piece of paper upon which a site is drawn by conventional methods.

NOTE: Commands are based on the active reference plane. There is always a plane active whether it is visible or not. Planes can be activated and edited in the RP Manager (Tools > RP > Add/Edit)

NOTE: Creating and placing 3D polylines/curves with an accurate z-coordinate is covered in the document ‘Leica Cyclone – Creating a Mesh and Modeling Surface Topography 3D’.

[/wptabcontent]

[wptabtitle] SHOW THE ACTIVE REFERENCE PLANE 2[/wptabtitle]

[wptabcontent]V. In general, we will trace along the edges of features, either where a feature meets the ground plane or where the ground plane changes. -> To place Reference Plane:

A. Tools –> Reference Plane –>Set to Viewpoint; this aligns the reference plane to the Top view. If you notice in the next image, the plane passes right through the middle of the site.

B. Move the plane to the base of the site -> Select a point at the lowest significant point on the site -> Tools –> Reference Plane –> Set plane origin at pick point. This translates the plane down to the site base.

[/wptabcontent]

[wptabtitle] SELECT TOP VIEW[/wptabtitle]

[wptabcontent]VI. Select Top View -> All tracing will be done in top view; it is important not to rotate the view; if necessary, lock rotation via View –> View Lock –> Rotate

VII. Select Orthogonal View -> Hot key ‘O’ (the hot key toggles between orthogonal and perspective views)

clip_image006

Figure 3 – (Left) Setting the Reference Plane to align with the Top-down Viewpoint (Right) Moving the Reference Plane down to the lowest point on the site

 [/wptabcontent]

[wptabtitle] TRACE SITE FEATURES W/ 2D DRAWING MODE[/wptabtitle]

[wptabcontent]
VIII. Use 2D Drawing Mode to trace site features -> Select a drawing tool -> Trace the feature of interest -> Accept/Create the drawing by either clicking the green check button at the top of the Drawing Toolbar or RC -> Create. The 2D sketch should turn bright green with orange handles at the intersections. Edit the line/sketch as needed (see some editing options in the following steps).

clip_image012

Figure 4 – (Left) Top view: two polylines (highlighted with orange handles) trace an upper stair and a lower stair. (Right) Perspective view showing that the stairs are located at different heights (different z-coordinates).

 

clip_image014

Figure 5 – A side view shows that the polylines representing the edges of the 2 stairs (in orange) are located at a single height/z-coordinate along the active reference plane (in green).
[/wptabcontent]

[wptabtitle] TRACING TIPS[/wptabtitle]

[wptabcontent]Tips – Object Types: Lines and polylines are open objects (with a beginning and an end); polygons are closed objects (with an area and a perimeter); curves can be created by picking a variety of points. Polylines are usually the best tool for fairly straight features. In general, it is better to have one polyline than multiple individual lines representing a feature. Any feature that you ultimately plan to extrude or model 3-dimensionally should be a closed object.

Tips: Use the Modes Toolbar clip_image010 to navigate within the space while drawing mode is active; while drawing, LC the hand icon to pan and LC the pencil icon to return to the current line/arc. Also use the Modes Toolbar to toggle between orthogonal and perspective views.

Tips – Accuracy: The level of detail/accuracy depends on many things, including the ultimate needs, user interpretation, and the level of zoom and point width while drawing. It is recommended to choose and use a consistent method (same level of zoom, same point width) for similarly sized features to improve the chances of multiple users creating similar results in an area. With irregular features, a polyline with many vertices often represents a curve more accurately than geometrically correct arcs.[/wptabcontent]

[wptabtitle] SNAP LINES AND POLYLINES TOGETHER[/wptabtitle]

[wptabcontent]IX. Snap ends of lines or polylines together -> Multi-pick two line (or polyline) segments to snap together -> With multi-pick, click and hold the handle of one of the lines -> Hold Shift while dragging the line toward the handle of the 2nd line until they snap together -> These lines are not joined, they remain separate entities

clip_image016

Figure 6 – (Left) The base of a column has been traced with a polyline and a line; (Middle) The point cloud’s visibility is turned off showing the two independent objects; (Right) The ends of the 2 objects have been snapped together but they remain as independent entities

[/wptabcontent]

[wptabtitle] MERGE LINES OR POLYLINES TOGETHER[/wptabtitle]

[wptabcontent]X. Merge lines or polylines together -> With multi-pick mode, pick each of the lines to be merged -> Create Object -> Merge -> Multiple objects now become a single object -> Snapping the end handles together and merging creates closed objects.

NOTE: Only similar objects can be merged (i.e.: lines to lines, polylines to polylines); In some cases polylines disappear due to the order the polylines were selected; If the line disappears after merging, undo, and reverse the order the lines are selected

NOTE: Toggling the visibility of points and objects (Property Manager > Shift + L), and adjusting the visibility of the Reference Plane (Tools > RP > Add/Edit), can help clarify what you are drawing.

XI. Extend a line -> Pick line (with pick mode NOT multi-pick) -> Pick the end handle, hold and drag (line will extend in same 3D direction as original line) OR Pick the line, multi-pick a point in the point cloud that you want to extend the line to -> Edit Object -> Extend to last selection (the line will extend in the direction it was originally created)[/wptabcontent]

[wptabtitle] EXPORTING[/wptabtitle]

[wptabcontent]XII. Exporting: There are several options for exporting topographic features. The objects created through tracing can be exported or the point cloud itself can be exported.

A. Export the lines, polylines, arcs -> Use the properties manager (Shift + L) to turn off the visibility/selectability of the point cloud -> Select All -> File -> Export

  • 2D DXF R12 Format – all objects export onto a single 2-dimensional plane (in this case the active RP) – use this setting for 2D applications
  • DXF R12 Format – 3D information is retained (in this case the z-coordinate for the lines) – use this setting if the objects have different z-coordinates
  • Objects may also be exported as ASCII or XML file types here
  • Once exported, the .dxf file can be opened in CAD, imported into SketchUp, or converted for multiple software packages.
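The difference between the two DXF options amounts to what happens to the z-coordinate: the 2D export collapses every vertex onto one plane, while the 3D export keeps the heights. A minimal sketch of that flattening, with hypothetical vertices:

```python
# A sketch of what the "2D DXF R12" option effectively does to exported
# polylines: every vertex z-coordinate is collapsed onto a single plane
# (here the active reference plane height), while the 3D option keeps z.
def flatten_to_plane(vertices, plane_z=0.0):
    return [(x, y, plane_z) for x, y, _ in vertices]

# Two stair edges traced at different heights lose their z-difference
# in a 2D export.
stair_edge = [(0.0, 0.0, 1.2), (2.0, 0.0, 1.2), (2.0, 0.0, 1.5)]
print(flatten_to_plane(stair_edge))
```

If the traced features sit at meaningfully different heights (as in Figures 4 and 5), the plain DXF R12 export is the one to use.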

B. Newer versions of AutoCAD (2009 and beyond) support point clouds. If you do not have Cyclone but you do have CAD, use Cyclone to export the points and then trace/model in CAD software much as we have done in Cyclone. The main issue is file size; in general, point clouds must be broken into smaller pieces to allow them to be imported into CAD software. See the CAST workflow, ‘Reducing Point Clouds for Autodesk Applications’ for more details.

NOTE: In general, export .PTS file types with a maximum size of 4mb to import into CAD as of 2011
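Breaking an ASCII point export into pieces under a size budget is mechanical enough to script. A hedged sketch, assuming the simple PTS layout of a point-count header line followed by one point per line (a real export may carry extra header fields):

```python
# A sketch of splitting an ASCII .PTS export into pieces under a byte
# budget (4 MB per the note above). Assumes a point-count header line
# followed by one point per line; each piece gets a corrected count header.
MAX_BYTES = 4 * 1024 * 1024

def split_pts(lines, max_bytes=MAX_BYTES):
    points = lines[1:]            # skip the original count header
    pieces, current, size = [], [], 0
    for line in points:
        n = len(line) + 1         # +1 for the newline
        if current and size + n > max_bytes:
            pieces.append([str(len(current))] + current)
            current, size = [], 0
        current.append(line)
        size += n
    if current:
        pieces.append([str(len(current))] + current)
    return pieces

tiny = ["3", "0 0 0 100", "1 0 0 100", "0 1 0 100"]
# With an artificially small 25-byte budget, the 3-point file splits in two.
for piece in split_pts(tiny, max_bytes=25):
    print(piece)
```

Each piece can then be written to its own .PTS file and imported into CAD separately.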

[/wptabcontent] [/wptabs]

Posted in Workflow

Modeling an Irregular Feature from Point Cloud Data – Method 1

In this series, columns in a deteriorating colonnade will be modeled by several methods.
Hint: You can click on any image to see a larger version.

[wptabs style=”wpui-alma” mode=”vertical”]
[wptabtitle] IMPORTANT NOTE ON OBJECTS IN ACAD[/wptabtitle]

[wptabcontent]IMPORTANT NOTE ON OBJECTS IN ACAD: When using the following methods (EXTRUDE, SWEEP, LOFT, REVOLVE) you can create solids or surfaces. Whether the object is a solid or a surface is determined by 2 things:

(1) Whether the polyline is open or closed: Open polygons and curves always create surfaces but closed polylines and curves can create either.

(2) Which tab is active at the top of the ACAD workspace (See Figure 1) – If the Solid tab is active, a solid is created; if the Surface tab is active, a surface object is created; if the Home tab is active, a solid is created from closed polylines by default (this default can be changed once a command is active by entering ‘M’ for Mode at the prompt). Solids seem to translate into the COE format and import into Cyclone much better than surfaces, which sometimes will not show up in Cyclone at all.

clip_image020

Figure 1 – Main ACAD toolbar with modeling tabs highlighted in Magenta; EXTRUDE, LOFT, REVOLVE, AND SWEEP exist on each of the modeling tabs. Which tab is active and whether the polyline is open or closed determines what type of object is created.

[/wptabcontent]

[wptabtitle] SET UP THE ACAD MODEL SPACE[/wptabtitle]

[wptabcontent]I. Set up the ACAD Model Space

1. Configure/Open the Cyclone MS and set up the ACAD model space (See the GMV’s “Leica CloudWorx 4.2 and AutoCAD 2012 – Digitizing a Point Cloud in 2D” for more information)

2. Create Layers for each of the 3 slices and the final column object, making the bottom slice’s layer active

3. Adjust the object geometry association variable in CAD -> command line: DELOBJ > adjust to value zero

(NOTE: This number determines whether the original geometry used to create a 3D object – in this case polylines – are retained or deleted when the object is created. This value can range from ‘0’ through ‘-3’, with ‘0’ retaining all geometry and with ‘-3’ deleting all defining geometry. Retaining geometry is recommended for analysis and any possible back-tracking. Search “DELOBJ” in ACAD help for more information)

[/wptabcontent]

[wptabtitle] SETUP ACAD MODEL SPACE – CONT.[/wptabtitle]

[wptabcontent]

4. Hide Regions to isolate first feature to be modeled, in this case, the colonnade.

clip_image022

Figure 2 Top view of colonnade – roof and other un-needed data has been hidden with Hide Regions; colonnade (highlighted in the magenta rectangle) is easily evaluated for complete/intact features

[/wptabcontent]

[wptabtitle] EXAMPLE[/wptabtitle]

[wptabcontent]Method 1: Slicing Cross Sections and Lofting – Point Clouds may be sliced to view specific sections or profiles to model; these slices are made across the x, y, or z axis based on the current UCS. Slicing large areas is useful when looking to see which features are the most intact/complete across a site. Slicing small areas allows for precise drawing/modeling. In this method, a column will be sliced and the section’s profile (ie: cross sections) will be traced and lofted into solid or surface objects.

In the first example, the column is sliced 3 times on the y-axis. Tracing the slices creates 3 polylines representing the section cuts from the bottom, middle, and top sections of the columns -> These cross sections will then be lofted to one another, forming the modeled column object.[/wptabcontent]

[wptabtitle] USE THE CLOUDWORX SLICE TOOLBAR[/wptabtitle]

[wptabcontent]II. Use the CloudWorx Slice Toolbar clip_image024 to slice the point cloud along the y-axis as the first section cut at the bottom of the column. Slices can be made in several ways.  NOTE: a slice must be named in the Cutplane Manager (main CloudWorx toolbar) in order to be saved; if unnamed, it will be deleted once it is deactivated!

1. Clip Points to Slice -> uses two parallel planes to define the slice; only those points between the two planes are shown > Command line: CWSLICE or CloudWorx > Clip Point Cloud > Slice > Define Axis > LC in viewport to place 1st and 2nd clipping planes.  NOTE: The current slice can be moved one step (equal to the width of the slice) forward (CWSLICEF) or backward (CWSLICEB) along its axis

[/wptabcontent]

[wptabtitle] CLIP POINTS TO SECTION[/wptabtitle]

[wptabcontent]

2. Clip Points to Section > uses single plane with only those points on one side of the plane visible > Command line: CWSECTION or Cloudworx > Clip Point Cloud> Section View > LC in viewport to define axis and direction (positive or negative)
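The two clipping commands above are a two-plane filter versus a one-plane filter. A minimal sketch, assuming points as (x, y, z) tuples and clipping planes perpendicular to the y-axis as in this example:

```python
# A sketch of the two clipping behaviors: a slice keeps only the points
# between two parallel planes, while a section keeps everything on one
# side of a single plane. Planes here are perpendicular to the y-axis.
def clip_slice(points, y_min, y_max):
    return [p for p in points if y_min <= p[1] <= y_max]

def clip_section(points, y_plane, positive=True):
    return [p for p in points if (p[1] >= y_plane) == positive]

cloud = [(0, 0.2, 1), (0, 1.1, 1), (0, 2.7, 1)]
print(clip_slice(cloud, 1.0, 2.0))   # -> [(0, 1.1, 1)]
print(clip_section(cloud, 1.0))      # -> [(0, 1.1, 1), (0, 2.7, 1)]
```

As the workflow notes, neither operation deletes anything: the points outside the slice or section are simply hidden from view.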

clip_image026

Figure 3 – (Left) Hide Regions command has isolated the colonnade – the Clip-to-Slice command (slices shown by magenta lines) creates the first section cut; (Right) Sliced colonnade – note the points are not deleted, simply hidden

[/wptabcontent]

[wptabtitle] TRACE AND SNAP TO POINT CLOUD[/wptabtitle]

[wptabcontent]

3. Trace first section cut, snapping directly to point cloud > Top View > Make correct layer current > Zoom in to view points comfortably and refresh point cloud to confirm all points visible clip_image028 (do this periodically, especially when zooming in/out) >Command Line: PL or POLYLINE to trace the profile/outline of the bottom slice of the column

REMEMBER: (1) Enable OSNAP “NODE” to snap to points (2) At command line: type “U” during active polyline command to undo vertices/go back without ending the command (3) At command line: PEDIT allows polyline(s) to be edited, joined, etc. (4) See ACAD Help: Drawing and Editing Polylines for more information

3A. Repeat these steps to create the polylines for the remaining section cuts (note you may create more or fewer slices as desired – a higher number of slices and/or increased complexity results in a more accurate tracing but also requires a larger and more difficult set of calculations to loft/join the separate sections; when too complex, the lofting command may fail to produce the final object as desired) > Confirm each section is made of only one polyline (use PEDIT -> JOIN and PEDIT -> CLOSE to join and close multiple polylines)

3B. Once all section cuts are traced, hide the visibility of the point cloud clip_image029 -> make the column object layer current

[/wptabcontent]

[wptabtitle] 3D MODELING MAIN TOOLBAR[/wptabtitle] [wptabcontent]III. 3D Modeling main toolbar > EXTRUDE tab pulls down to LOFT (see figure 4) or at command line: LOFT > Select cross sections in the order they are to be lofted to one another (here from bottom to top) > ENTER after selection > ENTER a 2nd time to accept “Cross Sections Only”

clip_image031

Figure 4 Extrude icon pulls down to reveal Loft and Revolve Icons in the 3D Modeling Workspace or enter LOFT at command line

[/wptabcontent]

[wptabtitle] RETURN TO ORIGINAL COORDINATES AND EXPORT[/wptabtitle] [wptabcontent]IV. Return to Original Coordinates and export > If you have altered the Coordinate System, return it to the World Coordinate System that matches the original scan world coordinates (See the section: ‘Setting up a Model Space in AutoCAD: Using User-defined coordinate systems’ and Figure 4 for more information) > Modeled object can now be edited or exported as desired > Select objects > File > Export[/wptabcontent]

[wptabtitle] TIPS FOR LOFTING[/wptabtitle] [wptabcontent]TIPS for lofting: Attempting to loft more than 2 complex cross sections, such as highly detailed tracings, may freeze the program or take a long time. If this happens, try lofting 2 polylines at a time to create separate objects, then group or convert these separate objects into a single surface or solid object as needed. Retaining the original geometry (ie: the polylines, via DELOBJ) and using layers is essential to dividing the lofting command into manageable pieces. NOTE: although multiple objects can be joined into a single object with the UNION command, objects that have been united do not seem to translate into COE files well; if you have used the UNION command and find importing the COE file into Cyclone is slow or unsuccessful, try the original objects, pre-UNION (this is another case in which retaining your original geometry is very helpful).

clip_image033

Figure 5 (Left) Each section cut has been traced as one polyline and the point cloud is hidden; 1st the bottom polyline and 2nd, the middle polyline are selected for lofting (the magenta arrow highlights the order of selection and lofting direction) (Right) 2 polylines create the first lofted object (ie: the bottom of the column)

[/wptabcontent]

[wptabtitle] EXAMPLES OF LOFTING[/wptabtitle] [wptabcontent]

clip_image035

Figure 6 (Left) The column has been completely lofted into 2 objects (in white) and the original polyline geometry is still present and visible (blue and gray circles) – (Right) UNION has combined the 2 pieces of the column and the point cloud’s visibility is turned on for comparison

[/wptabcontent]
[/wptabs]

Posted in Leica CloudWorx, Workflow

Leica Cyclone 7.1.1 : Importing Data into Cyclone

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] CYCLONE PREFERENCES: POINT CLOUD[/wptabtitle]

[wptabcontent]

Importing Data into Cyclone:

Before importing data (especially data collected onboard), it is important to set Cyclone’s preferences to ensure normals and color are imported simultaneously with the data.  First go to Edit -> Preferences -> Point Cloud tab. As of Cyclone 7.1.1, some of these preferences are the default setting.  Change the level from “Session” to “Default”, then check the following preference settings:

a) Work Directory – This is the location from which the computing power/memory space will be taken. It is recommended that you direct this working directory to the largest drive with the most free space available on your computer

b) Load: Max Points (Millions) – This determines how many points are loaded upon opening a ModelSpace. The larger the number, the more data that is loaded which greatly increases the required computing power and rendering time. It is recommended to leave this at default settings until you determine the needs in your data and the capabilities of your computer.

c) Display: Max Points (Millions) – This determines how many points are displayed upon opening a ModelSpace. The larger the number, the more data that is displayed. Again, this greatly increases the required computing power and rendering time. It is recommended to leave this at default settings until you determine the needs in your data and the capabilities of your computer.

d) Scan: Compute Normals During Scan: Yes

e) Scan: Compute Colors During Scans: Yes

f) Import: Compute Missing Normal Vectors: Yes

g) Import: Compute Colors During Importing: Yes

h) Display: Treat (0,0,0) Colors as Missing: No

i) All normals settings should be set to Yes
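The Load/Display "Max Points" settings above trade fidelity for responsiveness: when a cloud exceeds the budget, the viewer only loads or draws a subset. A hedged sketch of one simple way such a budget can be honored (uniform random subsampling; names are illustrative, not Cyclone's internals):

```python
import random

# A hedged illustration of a "Max Points" budget: when the cloud exceeds
# the limit, a uniform random subsample is one simple way a viewer can
# stay responsive while remaining visually representative.
def subsample(points, max_points, seed=0):
    if len(points) <= max_points:
        return list(points)
    rng = random.Random(seed)   # fixed seed for repeatable results
    return rng.sample(points, max_points)

cloud = [(i, i, 0) for i in range(1000)]
shown = subsample(cloud, max_points=250)
print(len(shown))  # -> 250
```

Raising the preference is equivalent to raising `max_points`: more fidelity, more memory and rendering time.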

NOTE: If these preferences were NOT set before import, data that has been collected onboard must have multi-image applied after import.  These preferences slow the import but it is less demanding on the user and comparable in time to applying the multi-image command later.[/wptabcontent]

[wptabtitle] IMPORT MORE SCANS[/wptabtitle] [wptabcontent]

  1. Add scans to your project by using the import command (either File -> Import C10 Data, or RC the project -> Import C10 Data).  It is important to import C10 data specifically (versus the generic ‘Import Data’) to ensure all data imports correctly.
  2. Select the Project Level Folder of your project (immediately within the default Scanner-Projects folder).  If you receive the warning ‘Invalid Folder Selection’, confirm that you are selecting the project level (this folder contains Station_### folders, a ControlPoints.ini, and a project.ini).
  3. Select Import.  If preferences were not set in Step 1 and you are prompted, DO estimate normals and DO NOT subsample the data (unless file sizes become an issue).
  4. All of the imported scans will be listed as ScanWorlds in Navigator.  They can be viewed by RC’ing and selecting Open TruSpace (a panoramic view constrained to the scanner position) or by expanding the menu and opening the ModelSpace (not constrained to the scanner position).  NOTE: If a message specifically lists scan(s) and states “Unable to import: Scan #”, this usually means that the scan itself is corrupted.  Move the scan to a different folder and re-try the import.  Sometimes the erroneous scan can be fixed (for example, when 2 scans have been collected within the same station, manually remove one scan and re-attempt the import)
  5. It is recommended that you inspect all ScanWorlds and targets in detail immediately after import.
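The ‘Invalid Folder Selection’ check described in step 2 can be approximated in a few lines of Python if you want to verify a folder before launching the import. This is only a sketch based on the folder contents listed above (Station_### folders, ControlPoints.ini, project.ini); the function name and the exact Station numbering pattern are assumptions, not part of Cyclone.

```python
import os
import re

def is_c10_project_folder(path):
    """Check whether 'path' looks like a C10 project-level folder:
    it should contain Station_### subfolders, a ControlPoints.ini,
    and a project.ini (per step 2 of the workflow above)."""
    try:
        entries = os.listdir(path)
    except OSError:
        return False
    has_stations = any(
        re.fullmatch(r"Station_\d{3}", e) and os.path.isdir(os.path.join(path, e))
        for e in entries
    )
    return has_stations and "ControlPoints.ini" in entries and "project.ini" in entries
```

If this returns False for the folder you intended to import, you have most likely selected a level above or below the project folder.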

[/wptabcontent]

[wptabtitle] CONTINUE TO…[/wptabtitle]

[wptabcontent] Continue to Cyclone 7.1.1: Registering Scans in Cyclone[/wptabcontent]
[/wptabs]

Posted in Cyclone, Scanning, Software, Workflows

Leica C10: Setting Up The C10

This workflow will show you how to set up the Leica C10 Laser Scanner prior to beginning your scanning project.
Hint: You can click on any image to see a larger version.

[wptabs style=”wpui-alma” mode=”vertical”]
[wptabtitle] INSTRUMENT’S COMPONENTS[/wptabtitle]

[wptabcontent]


[/wptabcontent]

[wptabtitle] SUGGESTED EQUIPMENT CHECKLIST[/wptabtitle]

[wptabcontent]

  1. Scanner and equipment as listed above
  2. Tripod
  3. Batteries: taking 4-6 fully charged batteries is usually appropriate for a day’s scanning; external chargers and/or extension cords are suggested for longer projects.  Ensure that batteries are fully charged and healthy the day before the project begins.
  4. Targets and Target numbers if applicable
  5. USB stick (updated and free of any infections/problems) if scanning onboard
  6. Laptop (with updated virus protection and adequate free space), Ethernet cable, external laptop battery, and mouse if scanning to the computer or processing data

[/wptabcontent]

[wptabtitle] PRE-SCANNING CHECK[/wptabtitle]

[wptabcontent]

  1. Do not scan in rain, snow, or fog
  2. Protect the scanner from excess moisture and rain
  3. If temperatures are outside the calibrated range, an error message will display; measuring accuracy cannot be specified if scanning proceeds
  4. If the scanner is taken from a cold environment into a warm, humid one, the glass window and even the optics can fog up, causing measurement error, as do dust and fingerprints
  5. Always verify that the lens is perfectly clean

[/wptabcontent]

[wptabtitle] C10 SCANNER SETUP[/wptabtitle]

[wptabcontent]

  1. Set up the tripod as stably and level as possible – never use the scanner without the tripod
  2. Remove the instrument from its case with both hands, grasping the handle on top with one hand and reaching under the base with the other in the space provided. Refrain from lifting by the tribrach (base mount), as it can turn unexpectedly
  3. Place the instrument on the tripod and secure it – always keep one hand on the handle until the scanner is firmly attached to the tripod
  4. Level the instrument using the leveling screws and the built-in bubble level

[/wptabcontent]
[wptabtitle] MEASURE INSTRUMENT HEIGHT[/wptabtitle]

[wptabcontent]
To get an accurate height measurement use the GHM008 instrument height meter in conjunction with the GHT196 distance holder which are both included with the scanner.

  1. Place the tripod centrally over the ground point and level the instrument.
  2. Click the GHT196 distance holder onto the tribrach. It must “snap” onto the cover over an adjusting screw.
  3. Unfold the measuring tongue and pull out the tape measure a little.
  4. Insert the GHM008 instrument height meter into the distance holder and attach it.
  5. Swivel the measure in the direction of the ground point and pull it out until the tip of the measuring tongue touches the point on the ground; keep it under tension and do not let it sag (clamp if necessary).
  6. Read the height of the instrument (ground – tilt axis) in the reading window at the red marking (in the example, 1.627 m).

[/wptabcontent]

[wptabtitle] SCANNING WITHOUT A COMPUTER / ‘ONBOARD'[/wptabtitle] [wptabcontent]

  1. Position the scanner in the center of the target field
  2. Level the scanner using the physical bubble
  3. Turn on the instrument by pressing the big silver button once
  4. The scanner will boot in ~90 seconds.

[/wptabcontent]

[wptabtitle] ONCE BOOTED YOU WILL SEE THIS SCREEN…[/wptabtitle] [wptabcontent]

[Idle State] should be displayed in the command line

Click ‘Status’

Click ‘Level & Ls Plummet’[/wptabcontent]

[wptabtitle] LEVEL THE SCANNER[/wptabtitle]

[wptabcontent]First level the scanner using the physical bubble, then level it as precisely as possible using the digital bubble.

Within the ‘Plummet’ menu you can turn the laser plummet on or off.

Within the ‘Compensator’ menu, ensure the compensator is turned on. Return to the main menu by pressing the ‘x’ in the top right corner. (At this time you can remove the standard handle if data collected above the scanner is important to the project.)[/wptabcontent]

[wptabtitle] CONTINUE TO…[/wptabtitle]

[wptabcontent]Continue to part 2 of the series, Leica C10: Starting a New Project[/wptabcontent][/wptabs]

Posted in Leica C10, Scanning, Setup Operation, Workflows

Leica Cyclone 7.1.1 : Interface Basics

[wptabs style=”wpui-alma” mode=”vertical”] [wptabtitle] DEFINITION OF TERMS[/wptabtitle] [wptabcontent]

What is a…..

ScanWorld: A single scan or a collection of scans aligned to a common coordinate system. ScanWorlds contain ControlSpaces and ModelSpaces.

ControlSpace: Contains the constraint information used to register multiple scans together.

ModelSpace: Contains information from the database that has been modeled, processed, or changed in some way.

ModelSpace Views: Where you make any/all changes to a point cloud or create 3D models.

TruSpace: New to Cyclone 7. TruSpaces are views that are constrained to individual scanner locations; note that you cannot pan a view in a TruSpace. Once a series of scans is aligned, you can view their individual TruSpaces and jump between them from within the registered ModelSpace. You can also take basic measurements and extract targets in TruSpaces. To view a scan’s TruSpace, simply RC the scan (ScanWorld) and select Open TruSpace.

[/wptabcontent]

[wptabtitle] HIERARCHY OF OBJECTS[/wptabtitle]

[wptabcontent]

The Hierarchy of Objects in the Cyclone Navigator

  • Servers contain Databases
  • Databases contain Projects
  • Projects contain:

    – ModelSpaces
    – ScanWorlds
    – Registrations
    – Images
    – Imported Files
    – Other (subordinate) Projects

  • ModelSpaces contain ModelSpace Views


Figure 1: The Cyclone Hierarchy

[/wptabcontent]

[wptabtitle] SETTING UP A NEW DATABASE/PROJECT[/wptabtitle]

[wptabcontent]

Setting up a new database/project in Cyclone

  1. By default, Cyclone is set up to recognize the machine that you are working on as the default Cyclone server. To add more servers, in the Cyclone Navigator go to Configure – Servers.
  2. To set up a new database, select Configure – Databases. Under the Server dropdown, select your_machine_name (unshared). It is highly recommended that you use the ‘Unshared’ server at all times; using the ‘Shared’ server requires more time for importing and processing and sometimes causes problems with importing files.  You can hide the ‘Shared’ server by selecting Configure – Servers and un-checking its visibility.  For more information on using shared servers, please refer to the Cyclone Help. A database is a container file for all information relevant to a specific project. It is best to create a unique database for each individual project that you are working on; this facilitates data handling and transferability in the future. The database file is an .imp file that should be stored in a root project directory where the scan files are also located. Create and store a new database in your designated project directory.
  3. Once a new database has been created for your project, close the Configure Databases dialog.
  4. Now, to create a project within the database, select the database name in the Cyclone Navigator and select Create – Project. Rename the project folder as necessary.
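The one-database-per-project layout recommended in step 2 can be sketched as a small helper that creates a root project directory to hold both the .imp database and the scan data. The function and the ‘Scans’ subfolder name are purely illustrative assumptions, not a Cyclone convention.

```python
from pathlib import Path

def make_project_root(parent, project_name):
    """Create a root directory for a single Cyclone project.
    The .imp database file and the scan files should both live
    here (per step 2 above); 'Scans' is an illustrative name."""
    root = Path(parent) / project_name
    # Subfolder to receive the raw scan data alongside the database
    (root / "Scans").mkdir(parents=True, exist_ok=True)
    return root
```

Keeping everything for one project under a single root like this is what makes the database easy to transfer or back up later.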

[/wptabcontent]

[wptabtitle] IMPORTING DATA INTO CYCLONE [/wptabtitle] [wptabcontent]

Importing Data into Cyclone

For details on importing data into Cyclone, see Leica Cyclone 7.1.1 : Importing Data into Cyclone

[/wptabcontent]

[wptabtitle] TRANSFERRING AND OPTIMIZING DATABASES[/wptabtitle]

[wptabcontent]

Transferring & Optimizing Databases (for shared users or when backing up data)

  1. In Cyclone Navigator, RC the database and select Optimize.  Then Configure and ‘Remove’ the database.  Do NOT select ‘Destroy’, as this deletes all files created!
  2. In Windows, copy all files, including the database folder (containing the .imp file, pcesets, eventlog folder, and recovery folder) and the scan folder (containing the Stations/scans, the ControlPoints.ini, and the project.ini).
  3. In the new location, copy and configure the database as usual.
  4. Optimizing an old database is necessary when Cyclone is updated with firmware and/or software. To optimize an old database (created before the update), configure it in the Cyclone Navigator, then RC it and select Optimize.  This is sometimes a lengthy process, but it should update old databases to the current version of Cyclone.
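Step 2 of the transfer amounts to a recursive copy of two folders. A minimal Python sketch might look like the following (the folder names are placeholders; the real names come from your own project):

```python
import shutil
from pathlib import Path

def back_up_cyclone_project(db_folder, scan_folder, backup_root):
    """Copy the database folder (.imp file, pcesets, eventlog and
    recovery folders) and the scan folder (Station_### folders,
    ControlPoints.ini, project.ini) into backup_root, as described
    in step 2 above."""
    dest = Path(backup_root)
    dest.mkdir(parents=True, exist_ok=True)
    for src in (Path(db_folder), Path(scan_folder)):
        # copytree copies the whole folder recursively
        shutil.copytree(src, dest / src.name)
```

Remember to ‘Remove’ the database in Cyclone first (step 1), so no files are locked while they are being copied.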

[/wptabcontent]

[wptabtitle] BASIC CYCLONE TOOLBARS[/wptabtitle]

[wptabcontent]

Basic Cyclone Toolbars (in ControlSpace and ModelSpace Viewers)


Figure 2: Cyclone Toolbars with common tools

[/wptabcontent]

[wptabtitle] COMMON SHORTCUT KEYS[/wptabtitle]

[wptabcontent]

Common Shortcut Keys

S: Seek
~: Single Pick/Navigate
Shift + ~: Multi-Pick Mode
Shift + L: Layers List

[/wptabcontent] [/wptabs]

Posted in Beginner, Shortcut Guides, Workflows