Placing virtual objects in ARKit

Sai Balaji
6 min read · Aug 26, 2023

In the previous article we saw how to detect horizontal plane surfaces in ARKit using ARWorldTrackingConfiguration plane detection. Now we are going to see how to place virtual objects on the horizontal planes detected in the real world.

Getting started

  • Create an ARKit app with SceneKit.
  • To add a virtual object we first need a 3D model of it. Here I’m using the teapot model provided by Apple.
  • Drag the 3D model into the art.scnassets group. Then select the .usdz file and go to Editor -> Convert to SceneKit file format (.scn). This creates a .scn file from the .usdz file, which we will be loading into our app.
  • In ViewController.swift we first need to detect the horizontal planes using the ARSCNViewDelegate methods.
  • In ViewController.swift add the following code.

extension ViewController: ARSCNViewDelegate {

    // Called when ARKit adds a new anchor, e.g. a newly detected plane.
    func renderer(_ renderer: SCNSceneRenderer, didAdd node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        self.createPlane(planeAnchor: planeAnchor, node: node)
    }

    // Called when ARKit refines an existing anchor's center or extent.
    func renderer(_ renderer: SCNSceneRenderer, didUpdate node: SCNNode, for anchor: ARAnchor) {
        guard let planeAnchor = anchor as? ARPlaneAnchor else { return }
        self.updatePlane(planeAnchor: planeAnchor, node: node)
    }

    // Called when ARKit removes an anchor, e.g. after merging two planes.
    func renderer(_ renderer: SCNSceneRenderer, didRemove node: SCNNode, for anchor: ARAnchor) {
        node.enumerateChildNodes { childNode, _ in
            childNode.removeFromParentNode()
        }
    }

    func createPlane(planeAnchor: ARPlaneAnchor, node: SCNNode) {
        let planeGeometry = SCNPlane(width: CGFloat(planeAnchor.extent.x), height: CGFloat(planeAnchor.extent.z))
        planeGeometry.materials.first?.diffuse.contents = #colorLiteral(red: 0.3411764801, green: 0.6235294342, blue: 0.1686274558, alpha: 0.6022350993)
        planeGeometry.materials.first?.isDoubleSided = true
        let planeNode = SCNNode(geometry: planeGeometry)
        planeNode.position = SCNVector3(x: planeAnchor.center.x, y: 0.0, z: planeAnchor.center.z)
        // SCNPlane is vertical by default, so rotate it to lie flat on the surface.
        planeNode.eulerAngles = SCNVector3(x: Float.pi / 2, y: 0, z: 0)
        node.addChildNode(planeNode)
    }

    func updatePlane(planeAnchor: ARPlaneAnchor, node: SCNNode) {
        if let planeNode = node.childNodes.first,
           let planeGeometry = planeNode.geometry as? SCNPlane {
            planeGeometry.width = CGFloat(planeAnchor.extent.x)
            planeGeometry.height = CGFloat(planeAnchor.extent.z)
            planeNode.position = SCNVector3(x: planeAnchor.center.x, y: 0.0, z: planeAnchor.center.z)
        }
    }
}

And make sure you have set the planeDetection property on ARWorldTrackingConfiguration:

override func viewWillAppear(_ animated: Bool) {
    super.viewWillAppear(animated)

    // Create a session configuration
    let configuration = ARWorldTrackingConfiguration()
    configuration.planeDetection = .horizontal

    // Run the view's session
    sceneView.session.run(configuration)
}
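The Xcode ARKit template also pauses the session when the view goes off screen; if your project does not already do this, a minimal counterpart to viewWillAppear looks like:

override func viewWillDisappear(_ animated: Bool) {
    super.viewWillDisappear(animated)

    // Pause the view's session so the camera and tracking stop
    sceneView.session.pause()
}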

We have already covered plane detection in ARKit in the previous article. In short, the code above simply visualizes each horizontal plane detected by ARKit by rendering a flat 2D plane over it.

Performing a ray cast to detect the position and orientation of the surface

Ray casting in ARKit projects an imaginary ray from the device’s camera into the scene the AR application has built up. When the ray intersects surfaces ARKit knows about, the intersection gives us the position and orientation of the surface. In our case, when the ray intersects the flat horizontal plane found by plane detection, we place our virtual object at the intersection point.

  • First we need to add a UITapGestureRecognizer to our ARSCNView, which works the same way as adding a tap gesture to any other UIView.
  • In the viewDidLoad() method add the following line of code.
sceneView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(tapped)))
  • Then add an @objc function to ViewController.swift which will be called whenever the user taps on the screen.
@objc func tapped(recognizer: UITapGestureRecognizer) {
    let tappedLocation = recognizer.location(in: self.sceneView)
    // Build a ray-cast query from the tapped point; it can be nil,
    // e.g. before the camera has delivered its first frame.
    guard let query = sceneView.raycastQuery(from: tappedLocation,
                                             allowing: .estimatedPlane,
                                             alignment: .horizontal) else { return }
    let hitResults = sceneView.session.raycast(query)
    if let firstResult = hitResults.first {
        self.addTeaPot(result: firstResult)
    }
}

Here we first get the exact location on the screen where the user tapped and use it as the origin of our ray cast. We then perform the actual ray cast by calling raycast(_:), which takes a query built with raycastQuery(from:allowing:alignment:). That method creates a ray-cast query originating from a point on the view, aligned with the center of the camera’s field of view. It takes the following parameters:

  • from — the starting point of the ray. In our case it is the position where the user tapped on the screen.
  • allowing — the type of target the ray is allowed to intersect. In our case it is .estimatedPlane, ARKit’s estimate of the surfaces it has detected.
  • alignment — whether the target is parallel or perpendicular to gravity. A horizontal plane is perpendicular to gravity, so we pass .horizontal.

The ray cast returns an array of ARRaycastResult values, each describing one intersection. We check that the array is not empty and call a custom function inside which we will load and place our 3D model.
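If you only want to place objects on planes ARKit has already fully detected, rather than on its rough estimate of the surface, you can tighten the query target. A hedged variation on the tap handler above, assuming the same sceneView outlet and the addTeaPot(result:) helper we define below (the function name here is just illustrative):

@objc func tappedOnDetectedPlane(recognizer: UITapGestureRecognizer) {
    let tappedLocation = recognizer.location(in: self.sceneView)
    // .existingPlaneGeometry restricts hits to the actual extent of
    // planes ARKit has already detected, instead of an estimated surface.
    guard let query = sceneView.raycastQuery(from: tappedLocation,
                                             allowing: .existingPlaneGeometry,
                                             alignment: .horizontal) else { return }
    let results = sceneView.session.raycast(query)
    if let firstResult = results.first {
        self.addTeaPot(result: firstResult)
    }
}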

Loading and placing the 3D model

In ViewController.swift create the following property

private let scene = SCNScene()

The scene is the SCNScene that our ARSCNView will render; later we will add the teapot node to its root node.

In viewDidLoad() add the following code

override func viewDidLoad() {
    super.viewDidLoad()

    // Set the view's delegate
    sceneView.delegate = self

    // Show statistics such as fps and timing information
    sceneView.showsStatistics = true
    sceneView.debugOptions = [.showFeaturePoints, .showWorldOrigin]

    // Set the scene to the view
    sceneView.autoenablesDefaultLighting = true
    sceneView.scene = scene

    // Place the teapot when the user taps the screen
    sceneView.addGestureRecognizer(UITapGestureRecognizer(target: self, action: #selector(tapped)))
}

Here we set some debugOptions to visualize the feature points detected by ARKit and a coordinate gizmo for the world origin. Then we set the autoenablesDefaultLighting property to true, which automatically adds and places an omnidirectional light source when rendering scenes that contain no lights or only ambient lights.
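If you later want more control than the automatic light, you could leave autoenablesDefaultLighting off and attach your own light node instead. A minimal sketch (the placement values here are illustrative, not from the original project):

// Hypothetical alternative: an explicit omnidirectional light.
let light = SCNLight()
light.type = .omni
light.intensity = 1000
let lightNode = SCNNode()
lightNode.light = light
lightNode.position = SCNVector3(x: 0, y: 2, z: 0) // two metres above the world origin
scene.rootNode.addChildNode(lightNode)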

Then inside ViewController.swift create a custom function named addTeaPot(result:) and add the following code.

func addTeaPot(result: ARRaycastResult) {
    // Load the converted .scn scene and pull out the teapot node.
    guard let teaPotScene = SCNScene(named: "art.scnassets/teapot copy.scn"),
          let teaPotNode = teaPotScene.rootNode.childNode(withName: "teapot", recursively: true) else { return }
    // The fourth column of the world transform holds the hit position.
    teaPotNode.position = SCNVector3(x: result.worldTransform.columns.3.x,
                                     y: result.worldTransform.columns.3.y,
                                     z: result.worldTransform.columns.3.z)
    teaPotNode.scale = SCNVector3(x: 0.002, y: 0.002, z: 0.002)
    scene.rootNode.addChildNode(teaPotNode)
}

Here we first load the teapot SCNScene from the teapot copy.scn file in our assets and get hold of the actual teapot node inside it. Then we set the position and scale of the teapot SCNNode. The position is the one given by the ARRaycastResult: its worldTransform is a transformation matrix that describes the position, rotation, and scale of the hit point in the real world.

An example of a transformation matrix is

| a  b  c  x |
| d  e  f  y |
| g  h  i  z |
| 0  0  0  1 |

Here the top-left 3x3 matrix ((a,b,c),(d,e,f),(g,h,i)) represents rotation and scale. The first three elements of the last column (the 4th column, index 3) are the x, y and z positions, which is what we use to place our 3D model. So result.worldTransform.columns.3.x means: take the first element (the x position) from the column with index 3 of the transformation matrix.
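As a concrete illustration, here is how the translation falls out of a simd_float4x4 (the values are made up):

import SceneKit
import simd

// A transform whose rotation part is the identity and whose translation
// is (0.1, 0.0, -0.5): half a metre in front of the world origin.
var transform = matrix_identity_float4x4
transform.columns.3 = simd_float4(0.1, 0.0, -0.5, 1.0)

let position = SCNVector3(x: transform.columns.3.x,
                          y: transform.columns.3.y,
                          z: transform.columns.3.z)
// position is now (0.1, 0.0, -0.5), exactly the fourth column's x, y, z.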

Then we set the scale of the model so that it is not too big, and finally add it to the SCNScene of our ARSCNView.

Now run the app. ARKit will first try to detect a horizontal plane; once one is detected, tap on the screen to place the 3D object in the real world.
