SSM Using Scalismo (4): Generalized Procrustes analyses

In this post, I am going to implement the partial and the full generalized Procrustes analyses (partial GPA and full GPA) with Scalismo. Before getting into the implementation, you may want to read this post explaining the theory of GPAs.

Partial GPA

Through partial GPA, more than two configurations/meshes are fitted/superimposed onto a common (and initially unknown) mean configuration. The procedure is iterative and involves translation and rotation of the meshes.

GPAs need meshes in correspondence; here, I load the femoral head meshes whose correspondence I previously established through non-rigid registration (this post). Initializing and loading the meshes:

object GPA extends App {
  import scalismo.geometry._
  import scalismo.common._
  import scalismo.ui.api._
  import scalismo.mesh._
  import scalismo.io.{StatisticalModelIO, MeshIO}
  import scalismo.statisticalmodel._
  import scalismo.registration._
  import scalismo.statisticalmodel.dataset._
  import java.awt.Color

  scalismo.initialize()
  implicit val rng = scalismo.utils.Random(42)

  val ui = ScalismoUI()
 // Loading data
  val drive = "I:/"
  val meshDir = drive + "registration/Registered/"
  val meshFiles = new java.io.File(meshDir).listFiles()
  meshFiles.foreach(file => println(s"\n$file"))

Visualizing the meshes:

val dsGroup = ui.createGroup("datasets") // a group to hold the mesh views
  val (meshes, meshViews) = meshFiles.map(meshFile => {
    val mesh = MeshIO.readMesh(meshFile).get
    val meshView = ui.show(dsGroup, mesh, "mesh")
    (mesh, meshView) // returns a tuple of the mesh and the associated view
  }).unzip // unzip the tuples into two separate sequences

  // setting some colors
  meshViews(0).color = new Color(85,235,40)
  meshViews(1).color = new Color(20,40,250)
  meshViews(2).color = new Color(250,150,20)
Fig. 1. Meshes and views of their cross-sectional contours.

Here, methods are defined to calculate the centroid (center of mass) of a mesh and to center the mesh. Centering is not strictly necessary for GPA, as GPA itself involves translation; moreover, the GPA method in Scalismo calculates the centroid of a given reference mesh and uses it as the center of rotation of the meshes throughout the process. Nonetheless, it does no harm if all the meshes are initially centered, and it is consistent with the GPA algorithm explained in the related post.

def CM (mesh:TriangleMesh[_3D]): EuclideanVector3D = {
    val numOfPoints = mesh.pointSet.numberOfPoints
    var total = EuclideanVector3D(0,0,0)
    val meshPoints = mesh.pointSet.points //iterator
    while (meshPoints.hasNext){ //iterator
      total = total + (meshPoints.next()).toVector
    }
    total*(1d/numOfPoints)
  }

  val centeredMeshes = meshes.map(mesh => {
    val translationVec = CM(mesh)*(-1)
    val translation = TranslationTransform[_3D](translationVec)
    mesh.transform(translation)
  })

To do GPAs with Scalismo, a DataCollection should be defined. A DataCollection needs a reference mesh and a sequence of meshes, all in correspondence. The GPA is performed on the sequence of meshes; the reference mesh is used as the domain of the transformations. A mean mesh, in Scalismo, is defined as the mean of the deformation vectors from the reference mesh to the meshes, added to the reference mesh (remember that all the meshes are in correspondence). It does not matter which, but one of the meshes should be selected as the reference mesh.

  // General Procrustes Analysis: Aligns a set of meshes with respect to some unknown “mean” shape (independent of ordering of shapes)
  // setting a reference mesh; it is needed for the data collection but has no effect on the GPA
  val refMesh = centeredMeshes(2)
  val allMeshes = centeredMeshes
  val dataClctn = ((DataCollection.fromMeshSequence(refMesh,allMeshes))._1).get
  // to check how many meshes are in the data collection
  println(dataClctn.size)

Performing the partial GPA (number of iterations = 10, precision of average point-wise distance between meshes (halt distance) = 1e-5):

// doing the partial GPA
  val gpaDataClctn = DataCollection.gpa(dataClctn, 10, 1e-5)

Retrieving the mean mesh and visualization:

// retrieving the mean mesh
  val meanMesh = gpaDataClctn.meanSurface
  //CM(meanMesh) // to check whether the mean mesh is centered or not
  val meanGroup = ui.createGroup("meanMesh")
  val meanMeshView = ui.show(meanGroup, meanMesh, "meanMesh")
  meanMeshView.color = new Color(245,20,50)
Fig. 2. Mean mesh (red) and views of cross-sectional contours of the mean and the original meshes.

The transformed versions of the original meshes onto the mean mesh are called the partial Procrustes fits. They can be retrieved and viewed:

//aligning by translation and rotation all the meshes onto the mean mesh
  val meanMeshPoints = meanMesh.pointSet.points.toIndexedSeq

  val gpaTransforms = for (mesh <- allMeshes) yield {
    val meshPoints = mesh.pointSet.points.toIndexedSeq
    LandmarkRegistration.rigid3DLandmarkRegistration(meshPoints.zip(meanMeshPoints), center = Point3D(0, 0, 0))
  }
  val gpaGroup = ui.createGroup("partial Gpa meshes")
 //partial GPA fits
  val transformedMeshes = allMeshes.zip(gpaTransforms).map { case (mesh, transToMean) =>
    mesh.transform(transToMean)
  }
  val transfrmdMeshViews = transformedMeshes.map(mesh => ui.show(gpaGroup, mesh, "trMesh"))
Fig. 3. The original and the partial-GPA fits.

The centroid size (see this post) of each mesh in this example can be calculated by this method:

def centroidSize (mesh:TriangleMesh3D): Double ={
    var total = 0d
    val meshPoints = mesh.pointSet.points //iterator
    val centroid = CM(mesh) // is a vector here
    while (meshPoints.hasNext){ //iterator
      total = total + scala.math.pow(((meshPoints.next().toVector - centroid).norm),2)
    }
    scala.math.sqrt(total)
  }

For example:

  println(centroidSize(transformedMeshes(0)))
  println(centroidSize(allMeshes(0)))
  println(centroidSize(meanMesh))

The centroid sizes are (unit of length):
original mesh 1: 487.222, mesh 1 partial-GPA fit: 487.222
original mesh 2: 692.725, mesh 2 partial-GPA fit: 692.725
original mesh 3: 566.668, mesh 3 partial-GPA fit: 566.668
mean mesh: 577.732

As expected, each original mesh and its partial-GPA fit have the same size, since the partial GPA involves no scaling. The mean mesh, however, has a different size, as observed.

Full GPA

The full GPA entails scaling, translation, and rotation of the meshes in order to superimpose each mesh onto the (unknown) mean mesh. Scalismo does not currently have a method performing the full GPA, but its partial GPA can be adapted to do so. To this end, the pieces of code implementing the partial GPA method can be copied from the file DataCollection.scala in the package scalismo.statisticalmodel.dataset; this is already done below, and the copied lines are used to define new methods. The only difference between the full-GPA method and the partial one is that the former uses similarity3DLandmarkRegistration instead of rigid3DLandmarkRegistration.

import scalismo.utils.Random
  def fGpa(dc: DataCollection, maxIteration: Int = 3, haltDistance: Double = 1e-5)(implicit rng: Random): DataCollection = {
    fGpaComputation(dc, dc.meanSurface, maxIteration, haltDistance)
  }
  def fGpaComputation(dc: DataCollection, meanShape: TriangleMesh[_3D], maxIteration: Int, haltDistance: Double)(implicit rng: Random): DataCollection = {

    if (maxIteration == 0) return dc

    val referencePoints = dc.reference.pointSet.points.toIndexedSeq
    val numberOfPoints = referencePoints.size
    val referenceCenterOfMass = referencePoints.foldLeft(Point3D(0, 0, 0))((acc, pt) => acc + (pt.toVector / numberOfPoints))

    val meanShapePoints = meanShape.pointSet.points.toIndexedSeq

    // align all shapes to the mean and create, for each, a transformation from the reference to the aligned shape
    val dataItemsWithAlignedTransform = dc.dataItems.par.map { dataItem =>
      val surface = dc.reference.transform(dataItem.transformation)
      val transform = LandmarkRegistration.similarity3DLandmarkRegistration(surface.pointSet.points.toIndexedSeq.zip(meanShapePoints), referenceCenterOfMass)

      DataItem("gpa -> " + dataItem.info, Transformation(transform.compose(dataItem.transformation)))
    }

    val newdc = DataCollection(dc.reference, dataItemsWithAlignedTransform.toIndexedSeq)
    val newMean = newdc.meanSurface

    if (MeshMetrics.procrustesDistance(meanShape, newMean) < haltDistance) {
      newdc
    } else {
      fGpaComputation(newdc, newMean, maxIteration - 1, haltDistance)
    }
  }

Let’s compute the mean mesh and then superimpose (by the similarity transformations) the meshes onto the mean mesh; the code is:

val fGpaDataClctn = fGpa(dataClctn, 20, 1e-5)
  val fGpaMeanMesh = fGpaDataClctn.meanSurface
  //CM(fGpaMeanMesh) // to check whether the mean mesh is centered or not
  val fGpaMeanGroup = ui.createGroup("fullGpaMeanMesh")
  val fGpaMeanMeshView = ui.show(fGpaMeanGroup, fGpaMeanMesh, "fullGpaMeanMesh")
  fGpaMeanMeshView.color = new Color(245,20,50)

  val fGpaGroup = ui.createGroup("Full Gpa Meshes")
  val fGpaMeanMeshPoint = fGpaMeanMesh.pointSet.points.toIndexedSeq

  val fullGpaTransforms = for (mesh <- allMeshes) yield {
    val meshPoints = mesh.pointSet.points.toIndexedSeq
    // transformations include scaling
    LandmarkRegistration.similarity3DLandmarkRegistration(meshPoints.zip(fGpaMeanMeshPoint), center = Point3D(0, 0, 0))
  }
  //full GPA fits
  val fGpaTransformedMeshes = allMeshes.zip(fullGpaTransforms).map { case (mesh, t) => mesh.transform(t) }
  val fGpaTransformedMesheViews = fGpaTransformedMeshes.map(mesh => ui.show(fGpaGroup, mesh, "trMesh"))

  fGpaTransformedMesheViews(0).color = new Color(85,235,40)
  fGpaTransformedMesheViews(1).color = new Color(20,40,250)
  fGpaTransformedMesheViews(2).color = new Color(250,150,20)

Here are the views:

Fig. 4. Full-GPA mean mesh (red) and views of cross-sectional contours of the mean and the original meshes.
Fig. 5. The original meshes and their full-GPA fits.

Again, the centroid sizes can be calculated as:
original mesh 1: 487.222, mesh 1 full-GPA-transformed: 418.329
original mesh 2: 692.725, mesh 2 full-GPA-transformed: 417.958
original mesh 3: 566.668, mesh 3 full-GPA-transformed: 419.082
mean mesh: 421.707

As can be observed, in the full GPA the original meshes are scaled to fit the mean mesh. The sizes of the transformed meshes and of the mean mesh are close to each other, because scaling is part of the transformations fitting the meshes onto a common mean. It should also be noted that the (relative) scales of the meshes are not preserved.

Saving the mean meshes and the fits into files and closing the object:

MeshIO.writeMesh(meanMesh, new java.io.File(drive + "registration/Partial GPA/" + "meanMesh.stl"))
  (0 until 3).foreach(i => MeshIO.writeMesh(transformedMeshes(i), new java.io.File(drive + "registration/Partial GPA/" + s"mesh${i}.stl")))

  MeshIO.writeMesh(fGpaMeanMesh, new java.io.File(drive + "registration/Full GPA/" + "meanMesh.stl"))
  (0 until 3).foreach(i => MeshIO.writeMesh(fGpaTransformedMeshes(i), new java.io.File(drive + "registration/Full GPA/" + s"mesh${i}.stl")))

}

Full GPA and Shapes

Until this moment, I did not refer to a mesh/surface as a shape, because a shape has its own technical definition. The shape of an object/configuration/mesh is all its geometrical information modulo its scale, location, and rotational effects. In other words, a shape is an equivalence class of geometrical objects modulo the similarity transformations. The full GPA removes the scale, location, and rotational effects of objects/meshes (relative to a mean object/mesh) in order to fit them all onto the mean mesh, which is the sample full Procrustes mean shape \hat\mu_F. Therefore, whatever geometrical variations remain between the mean shape and each of the full Procrustes fits (full-GPA transformed meshes) reflect the differences/dissimilarities in their shapes.

SSM Theory

Procrustes Analyses

Some definitions*

1) The Shape of an object is all the geometrical information that remains when location, scale, and rotational effects are removed from an object. In other words, the shape of an object is invariant under the Euclidean similarity transformations, which are translation, isotropic scaling, and rotation.

2) The Form of an object is all the geometrical information that remains when the location and rotational effects of an object are removed. Form is also called size-and-shape. Form is invariant under rigid-body transformations. Hence, we can say shape is form with the size (the size/scale information) removed.

3) Same form and same shape: two objects have the same form, size-and-shape, if they can be translated and rotated relative to each other so that they exactly match. Two objects have the same shape, if they can be translated, rotated and scaled relative to each other so that they exactly match.

The words matching, registration, superimposing and fitting are equivalent.

Fig. 1. Shape vs form.

4) A landmark is a point of correspondence on each object, e.g. the tip of the nose of all the dinosaurs in a population/sample.

5) Geometric shape analysis: an object can be (discretely) represented by a collection of landmarks (a point cloud); the landmark coordinates then retain the geometry of the point-cloud configuration. This approach to shape analysis is called geometric shape analysis.

6) The Configuration is a collection (set) of landmarks on/of a particular object. The configuration of an m-D object with k landmarks can be represented by a k\times m matrix called the configuration matrix. For a 3D object we have:

\begin{bmatrix} x_1&y_1&z_1\\ x_2&y_2&z_2\\ x_3&y_3&z_3\\ \vdots& \vdots&\vdots\\ x_k&y_k&z_k\\ \end{bmatrix}

7) A size measure of a configuration is any positive real-valued function of the configuration matrix, X, such that \forall a\in \mathbb{R}^+,\ s(aX)=as(X).

8) The centroid size is defined as the square root of the sum of squared Euclidean distances from each landmark to the centroid of a configuration:

S(X):=\sqrt{\sum_{i=1}^{k}{\|X_{i,}-\overline{X}\|^2}}\tag{1}

where X_{i,} is the i-th row of X, and \overline{X}:=\frac{1}{k}\sum_{i=1}^{k}{X_{i,}}\in \mathbb{R}^m,\ m=3\text{ for 3D}. \|.\| is the Euclidean vector norm.

The centroid size can be re-written using matrix algebra:

S(X)=\|CX\|\tag{2}

where C:=I_k-\frac{1}{k} 1_k 1_k^T is called the centering matrix, with I_k the k\times k identity matrix and 1_k\in\mathbb{R}^{k\times 1} a column vector of ones, and \|.\| is the Frobenius norm (sometimes also called the Euclidean norm) of a matrix, i.e.
\|A\|:=\sqrt{trace(A^\text{T}A)}. If the configuration X is already centered at the origin, then CX=X.

9) The Euclidean similarity transformations of a configuration matrix X are the set of translated, rotated, and isotropically scaled X. In a mathematical notation:

\{\beta X \Gamma + 1_k\gamma^{\text T}:\beta\in \mathbb{R}^+, \Gamma\in SO(m),\gamma\in \mathbb{R}^m\}\tag{3}

where \beta \in\mathbb{R}^{+} is the scale factor, SO(m) is the space of rotation matrices, and \gamma\in\mathbb{R}^m is the translation vector. Vectors are considered column vectors; 1_k\in\mathbb{R}^k is a column vector of ones, and m=3 for 3D.
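
To make Eq. 3 concrete, here is a minimal Breeze sketch (Breeze is also used in the rigid-alignment post below) applying a similarity transformation to a small configuration matrix; the object name and the values of \beta, \Gamma, and \gamma are made up for illustration:

import breeze.linalg.{DenseMatrix, DenseVector}

object SimilarityDemo extends App {
  // a configuration matrix X with k = 4 landmarks in 3D, one landmark per row
  val X = DenseMatrix(
    (0.0, 0.0, 0.0),
    (1.0, 0.0, 0.0),
    (0.0, 1.0, 0.0),
    (0.0, 0.0, 1.0))

  val beta = 2.0                              // isotropic scale factor
  val theta = math.Pi / 6                     // rotation angle about the z-axis
  val Gamma = DenseMatrix(                    // an element of SO(3)
    ( math.cos(theta), math.sin(theta), 0.0),
    (-math.sin(theta), math.cos(theta), 0.0),
    ( 0.0,             0.0,             1.0))
  val gamma = DenseVector(5.0, -1.0, 3.0)     // translation vector
  val ones = DenseVector.ones[Double](X.rows) // 1_k

  // Eq. 3: beta * X * Gamma + 1_k gamma^T (ones * gamma.t is the outer product)
  val Xtransformed = X * Gamma * beta + ones * gamma.t
  println(Xtransformed)
}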

10) Rigid-body transformations of a configuration matrix X are the set of translated and rotated X. In a mathematical notation:

\{X \Gamma + 1_k\gamma^{\text T}:\Gamma\in SO(m),\gamma\in \mathbb{R}^m\}\tag{4}

11) The centered pre-shape of a configuration matrix X is obtained (defined) by removing the location and scale/size information of the configuration. Location can be removed by centering the configuration, i.e. translating the configuration by moving its centroid to the origin. Therefore, the translated configuration is X_C:=CX. The size is removed by scaling the configuration by its size defined as Eq. 2. Therefore, the pre-shape of X is obtained as:

Z_C:=\frac{CX}{\|CX\|}=\frac{CX}{\|X_C\|}\tag{5}

Removing size or scale means scaling to unit size. Thereby, two configurations with different sizes lose their size information and also their scale. The scale is a quantity that expresses the ratio between the sizes of two configurations, either two with different shapes or two with the same shape (one being a scaled version of the other). In another scenario (as in the full Procrustes analyses), if two or more configurations are scaled by different scale factors, they also lose their scale/size information (relative to each other), although their sizes may not be unit.
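
As a numerical illustration of Eq. 2 and Eq. 5, here is a minimal Breeze sketch computing the centering matrix, the centroid size, and the pre-shape of a toy configuration (the object name and the coordinates are made up):

import breeze.linalg.{DenseMatrix, sum}

object PreShapeDemo extends App {
  // a configuration matrix X with k = 3 landmarks in 3D
  val X = DenseMatrix(
    (0.0, 0.0, 0.0),
    (2.0, 0.0, 0.0),
    (0.0, 2.0, 0.0))
  val k = X.rows

  // centering matrix C = I_k - (1/k) 1_k 1_k^T
  val C = DenseMatrix.eye[Double](k) - DenseMatrix.ones[Double](k, k) / k.toDouble

  def frobeniusNorm(a: DenseMatrix[Double]): Double = math.sqrt(sum(a *:* a))

  val XC = C * X            // centered configuration X_C
  val s = frobeniusNorm(XC) // centroid size, Eq. 2
  val Z = XC / s            // pre-shape Z_C, Eq. 5: centered and of unit size
  println(s"centroid size = $s, size of the pre-shape = ${frobeniusNorm(Z)}") // the latter is 1.0
}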

12) Reworded definition of a shape: a shape is an equivalence class of geometrical figures/objects/configurations modulo (what remains after removing the effects of) the similarity transformations. The shape of X is denoted by [X]. In order to visualise the shape of a configuration, a representative of the equivalence class is considered, called an icon, also denoted by [X].

Two configurations X\in\mathbb{R}^{k\times m} and X'\in\mathbb{R}^{k\times m} have the same shape, i.e. [X]=[X'] iff there exist \beta,\Gamma,\text{ and }\gamma such that:

X'=\beta X \Gamma + 1_k\gamma^{\text T}

In other words, two configurations have the same shape if they perfectly match after the removal of their location, rotation, and size/scale.

Fig. 2. An equivalence class of configurations/objects modulo the similarity transformation and its icon. All configurations with the same shape are in the same class/set.

13) Shape space is the space/set of all shapes, i.e. equivalence classes, for example \{[X_1], [X_2], [X_3],...\}. All have their locations, rotations, and sizes removed.

14) Reworded definition of form (size-and-shape): a form is an equivalence class of geometrical configurations modulo translation and rotation. The form/size-and-shape of a configuration/object X is denoted by [X]_S, with an icon representing the class. Two configurations X\in\mathbb{R}^{k\times m} and X'\in\mathbb{R}^{k\times m} have the same form, i.e. [X]_S=[X']_S, iff there exist \Gamma\text{ and }\gamma such that:

X'=X\Gamma + 1_k\gamma^{\text T}

Fig. 3. An equivalence class of configurations/objects modulo rotation and translation and its icon. All configurations with the same form are in the same class/set.

15) Size-and-Shape/Form space is the space/set of all forms, i.e. equivalence classes. For example \{[X_1]_S, [X_2]_S , [X_3]_S,...\}.

16) By removing the size of the form of a configuration (scaling it to unit centroid size), the shape of the configuration is obtained.

17) Clarification of the term “removing”: let’s imagine a set of n configurations (n\ge 1) whose scale, location, and rotational information we want to remove. Removal of scale/size means scaling all the configurations to unit size (each divided by, for example, its centroid size).

Removal of location means translating all the configurations to a particular target point in space: the same single landmark or point of reference (like the centroid) is selected on each configuration, and the configuration is (rigidly) translated along the vector from the point of reference to the target point. The target point can be the origin of the coordinate system/frame of reference.

Removal of rotation is done by optimising over rotations, i.e. minimising some criterion. For instance, to remove the rotations (rotational information) of two configurations whose locations (and maybe scales/sizes) are already removed, a particular relative rotation is applied (e.g. one is kept stationary and the other is rotated) so that the two get as close as possible with respect to some definition of distance. Once the rotation is removed, any deviation of one configuration from the other originates from their locations, scales/sizes, and shapes. To compare shapes, we also have to remove the locations and scales of the configurations; then any deviation is due to the shapes of the configurations. If only the location and rotational information are removed, then any relative deviation originates from the sizes and shapes (forms) of the configurations.

Matching two configurations and measure of distance between shapes

Consider a collection/set of shapes, as defined above, all having the same number of landmarks, with the landmarks in one-to-one correspondence. An important question is how to define a distance between two shapes in the set; it is natural to define a measure between the members of a set. The notion of shape distance indicates the difference between two shapes, i.e. how far two shapes are from each other in the shape space.

A natural approach is to match/fit/superimpose/register a configuration X onto a configuration Y (with the same number of landmarks and in correspondence) using the similarity transformations applied to X; the magnitude of the difference between the fitted configuration \widehat{X} and Y then indicates the magnitude of the difference in shape between the two configurations X and Y.

Fig. 5. Configuration X transforms up to similarity transformation to get fitted onto configuration Y.

So how to match two configurations? The full ordinary Procrustes analysis (FOPA) is the method of matching/registering/superimposing/fitting two (m-D) configurations with the same number of landmarks as closely as possible onto each other. This method applies the similarity transformations to one of the configurations to closely match the other one. But what is the measure of closeness? The definition of the method makes it perspicuous:

The method of FOPA matches two configurations through transforming one onto the other one (target configuration) up to the similarity transformations and minimizing the squared Frobenius/Euclidean norm of the difference between the transforming and the target configurations. This means to minimize the following expression:

D_{FOPA}^2(X_1,X_2):=\|X_2-(\beta X_1\Gamma+1_k\gamma^\text{T})\|^2\tag{6}

The minimum value of D_{FOPA} is a measure of similarity/dissimilarity. Observe that X_1 is the transforming configuration (input) and X_2 is the target configuration. Also, all the similarity transformations are involved in the registration; this is why the term full is used. In order to find the matching configuration, the similarity parameters should be determined, i.e.:

(\hat\beta, \hat\Gamma, \hat\gamma)=\argmin_{\beta,\Gamma,\gamma}\|X_2-\beta X_1\Gamma-1_k\gamma^\text{T}\|^2\tag{7}

It is good to know that \hat\gamma=0 if the configurations are already centered.

The full Procrustes fit of X_1 onto X_2 is then:

X_1^{FP}=\hat\beta X_1\hat\Gamma+1_k\hat\gamma^\text{T}\tag{8}

It is tempting to choose the minimum of Eq. 6 as a measure of distance between the shapes of the two configurations. However, \min D_{FOPA}^2(X_1,X_2)\neq \min D_{FOPA}^2(X_2,X_1), i.e. it is not symmetric, unless both configurations have the same size, for example unit size after pre-scaling. Therefore, the full Procrustes distance d_F between two shapes (of two configurations) is defined as:

d_F^2(X_1,X_2):=\min\ D_{FOPA}^2(\frac{X_1}{\|X_1\|},\frac{X_2}{\|X_2\|})\tag{9}

FOPA involves isotropic scaling of the input configuration, in addition to the other two similarity transformations, to register it onto the target configuration. This means that the fitted configuration does not have the same size/scale as its original configuration. Therefore, FOPA removes the size of the input configuration.
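
As a concrete sketch, Scalismo's similarity3DLandmarkRegistration (the same call used for the full GPA earlier in this post) performs this kind of least-squares similarity registration on pairs of corresponding points. Here, mesh1 and mesh2 are placeholders for two meshes in correspondence, and the imports and initialization of the GPA object above are assumed:

  // FOPA of mesh1 (input) onto mesh2 (target)
  val pointPairs = mesh1.pointSet.points.toIndexedSeq.zip(mesh2.pointSet.points.toIndexedSeq)
  // translation + rotation + isotropic scaling, i.e. the similarity transformations
  val fopa = LandmarkRegistration.similarity3DLandmarkRegistration(pointPairs, center = Point3D(0, 0, 0))
  val mesh1FP = mesh1.transform(fopa) // the full Procrustes fit of mesh1 onto mesh2 (Eq. 8)
  // the achieved minimum of Eq. 6: root of the sum of squared distances between corresponding points
  val minDFopa = math.sqrt(
    mesh1FP.pointSet.points.toIndexedSeq
      .zip(mesh2.pointSet.points.toIndexedSeq)
      .map { case (p, q) => (p - q).norm2 }
      .sum)

Note that for the full Procrustes distance d_F of Eq. 9, the two meshes should be pre-scaled to unit centroid size before this registration.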

Another scenario of fitting the input configuration to the target one is to register while preserving the size/scale of the input, i.e. not to scale the input (the scale of the target is also preserved). In this regard, partial ordinary Procrustes analysis (POPA) is a method of registration only over translation and rotation to match two configurations. This involves the minimization of:

D_{POPA}^2(X_1,X_2):=\|X_2-(X_1\Gamma+1_k\gamma^\text{T})\|^2\tag{10}

The partial Procrustes fit of X_1 onto X_2 is then (see Eq. 7 with \beta=1):

X_1^{PP}=X_1\hat\Gamma+1_k\hat\gamma^\text{T}\tag{11}

Based on POPA, the partial Procrustes distance can be defined. If the two configurations have the same size, e.g. by being pre-scaled to unit size (i.e. the size is removed), then the following is defined as the partial Procrustes distance between two shapes (of two configurations), d_P(X_1,X_2):

d_P^2(X_1,X_2):=\min\ D_{POPA}^2(\frac{X_1}{\|X_1\|},\frac{X_2}{\|X_2\|})\tag{12}

Calculating the POPA distance, unlike the FOPA distance, does not involve additional scaling of the (pre-scaled) unit-sized configurations \frac{X_1}{\|X_1\|} and \frac{X_2}{\|X_2\|} in the minimization of D_{POPA}^2. Nevertheless, obtaining the POPA distance still entails removing the scale of the configurations.
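
Analogously, a sketch for d_P of Eq. 12 with Scalismo: center and pre-scale both meshes to unit centroid size (reusing the CM method from the code above), then do a POPA with rigid3DLandmarkRegistration. Again, mesh1 and mesh2 are placeholders for two meshes in correspondence:

  // pre-shape of a mesh: centered and scaled to unit centroid size
  def toPreShape(mesh: TriangleMesh[_3D]): TriangleMesh[_3D] = {
    val centered = mesh.transform(TranslationTransform[_3D](CM(mesh) * (-1)))
    val s = math.sqrt(centered.pointSet.points.map(p => p.toVector.norm2).sum) // centroid is at the origin
    centered.transform((p: Point[_3D]) => (p.toVector / s).toPoint)
  }
  val (z1, z2) = (toPreShape(mesh1), toPreShape(mesh2))
  // POPA: translation + rotation only
  val popa = LandmarkRegistration.rigid3DLandmarkRegistration(
    z1.pointSet.points.toIndexedSeq.zip(z2.pointSet.points.toIndexedSeq), center = Point3D(0, 0, 0))
  val dP = math.sqrt(
    z1.transform(popa).pointSet.points.toIndexedSeq
      .zip(z2.pointSet.points.toIndexedSeq)
      .map { case (p, q) => (p - q).norm2 }
      .sum)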

Remark: the two defined distances have different values.

Mean shape

First some motivation from the arithmetic mean of numbers. Every dinosaur knows that the arithmetic mean/average of n real numbers is defined as: \hat\mu:=\frac{1}{n} \sum_{1}^{n} x_i. Let’s define a function as g(\mu):=(\frac{1}{n} \sum_{1}^{n} x_i)-\mu. In other words:

g(\mu):=\frac{1}{n} \sum_{i=1}^{n} (x_i-\mu)

It is now obvious that if we let \mu=\hat\mu then g(\mu)=0. This is what we wanted out of defining the function. However, zero is not the minimum of g(\mu): this function is strictly decreasing in \mu, hence it has no local or global minimum (just take the derivative w.r.t. \mu). Let's instead define:

f(\mu):=\frac{1}{n} \sum_{i=1}^{n} (x_i-\mu)^2

What minimizes f(\mu)? Solving \frac{\mathrm d}{\mathrm d \mu}f(\mu)=0, we get:

\frac{\mathrm d}{\mathrm d \mu}f(\mu)=\frac{1}{n} \sum_{i=1}^{n} -2(x_i-\mu)=0 \newline \implies \frac{1}{n} \sum_{i=1}^{n} (x_i-\mu)=0

This is just g(\mu)=0 which has the answer \hat\mu. The second derivative of f(\mu) is \frac{\mathrm d^2}{\mathrm d \mu^2}f(\mu)=2>0 which indicates that \hat\mu minimizes f(\mu).

Using the Euclidean distance/metric/norm of real numbers, i.e. d_E(x_1,x_2):=|x_1-x_2| with |.| being the absolute value, we can re-define f(\mu) as:

f(\mu):=\frac{1}{n} \sum_{i=1}^{n} |x_i-\mu|^2=\frac{1}{n} \sum_{1}^{n} d_{E}^2(x_i,\mu)

This results in (it can be easily proved by equating the derivative with zero):

\hat\mu=\argmin_{\mu}\frac{1}{n} \sum_{i=1}^{n} d_{E}^2(x_i,\mu)
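
A quick numeric sanity check of this characterization (the numbers are made up):

  val xs = Seq(1.0, 2.0, 4.0, 7.0)
  def f(mu: Double): Double = xs.map(x => (x - mu) * (x - mu)).sum / xs.size
  val muHat = xs.sum / xs.size // the arithmetic mean, 3.5
  println(f(muHat))            // 5.25
  println(f(muHat - 0.1))      // 5.26, larger
  println(f(muHat + 0.1))      // 5.26, larger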

This motivation suggests that for a set of objects with a defined distance function/metric, a mean/average object can be imagined: the object that minimizes the average of the squared distances from all the objects in the set to it. The mean shape of a set of shapes sampled from a population of shapes can be defined using the distance functions given by Eq. 9 and Eq. 12:

The sample full Procrustes mean shape is defined as:

\hat\mu_F:=\arg\inf_{\mu}\frac{1}{n} \sum_{i=1}^{n} d_{F}^2(X_i,\mu)\tag{13}

The sample partial Procrustes mean shape is defined as:

\hat\mu_P:=\arg\inf_{\mu}\frac{1}{n} \sum_{i=1}^{n} d_{P}^2(X_i,\mu)\tag{14}

In words, the mean shape is a shape with minimum distance from each configuration’s shape.

Note that in both definitions, the configurations will be pre-scaled to the unit size (due to the definition of the metrics).

Remark: the mean shape refers to the mean of the shapes not the mean of configurations.

Matching more than two configurations: generalized Procrustes analyses

The terms FOPA/POPA refer to Procrustes matching of one configuration onto another. If there are more than two configurations, it is natural to register/match them all to a mean shape or mean form. To do so, generalized Procrustes analysis (GPA) is utilized to superimpose two or more configurations onto a common unknown mean shape/form. GPA comes in two variants: full and partial GPA. The GPA provides a practical method of computing the sample mean shape defined by Eq. 13 or 14. Unlike the full/partial OPA, the output of GPA (the Procrustes fits) does not depend on the order of superimposition.

Full GPA

The method of full GPA transforms n\geq 2 configurations up to the similarity transformations (scaling + rotation + translation) with respect to a common unknown mean shape [\mu], minimising the following total sum of squares with respect to the transformation parameters and \mu, while imposing a size constraint to avoid a trivial solution. A trivial solution would be all \beta_i close to zero (scale is not allowed to be exactly zero, in order not to make a configuration vanish) and no translation; the mean shape would then have a size close to zero, i.e. it would almost vanish.

G(X_1,…X_n):=\sum_{i=1}^{n} \|\beta_iX_i\Gamma_i+1_k\gamma_i^\text{T}-\mu\|^2\tag{15} \newline \sum_{i=1}^{n}S^2(\beta_iX_i\Gamma_i+1_k\gamma_i^\text{T})=\sum_{i=1}^{n}S^2(X_i)

In the above expression, \beta_i,\Gamma_i,\gamma_i, \text{ and } \mu are unknown. Once the minimization is solved, the estimates of the optimum/minimizing parameters, \hat\beta_i,\hat\Gamma_i,\hat\gamma_i, and the estimate of the mean shape, \hat\mu, are found. Note that if we already knew the population mean shape, we would just align each configuration separately to that mean shape.

Once the above minimization is solved, the full (generalized) Procrustes fit of each configuration (onto the estimated mean) is:

X_i^{FP}=\hat\beta_iX_i\hat\Gamma_i+1_k\hat\gamma_i^\text{T}\tag{16}

Note that the scales \hat\beta_i of the transformed configurations (full Procrustes fits) are not necessarily the same. Therefore, the full GPA does not preserve the relative scale of the configurations through their optimal transformations: if one configuration is \alpha times larger than another, the two will generally not remain \alpha times each other after being optimally transformed and fitted to the mean shape.

Assuming that \hat\beta_i,\hat\Gamma_i,\hat\gamma_i,\hat\mu minimize expression 15, we can write:

\hat\mu=\argmin_{\mu}\sum_{i=1}^{n} \|\hat\beta_iX_i\hat\Gamma_i+1_k\hat\gamma_i^\text{T}-\mu\|^2\text{ or}\tag{17} \newline \hat\mu=\argmin_{\mu}\sum_{i=1}^{n} \|X_i^{FP}-\mu\|^2

The solution is then (see the proof here):

\hat\mu=\frac{1}{n}\sum_{i=1}^{n} X_i^{FP}\tag{18}

which is the arithmetic mean of the Procrustes fits. This means that once the configurations are registered to the estimate of the unknown mean, their arithmetic mean/average equals the estimated mean.
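
This can be checked numerically against the full-GPA code from the first part of this post: the point-wise arithmetic mean of the full Procrustes fits should coincide (up to the convergence tolerance and the final re-registration step) with the retrieved mean mesh. A minimal sketch reusing fGpaTransformedMeshes and fGpaMeanMesh, with the same imports assumed (PointId comes from scalismo.common):

  val n = fGpaTransformedMeshes.size
  val maxDeviation = (0 until fGpaMeanMesh.pointSet.numberOfPoints).map { i =>
    val pid = PointId(i)
    // the arithmetic mean of the corresponding points of the fits (Eq. 18)
    val avg = fGpaTransformedMeshes.map(_.pointSet.point(pid).toVector).reduce(_ + _) / n
    (avg - fGpaMeanMesh.pointSet.point(pid).toVector).norm
  }.max
  println(maxDeviation) // expected to be close to zero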

Now we want to show that the estimated mean shape \hat\mu=\frac{1}{n}\sum_{i=1}^{n} X_i^{FP} has the same shape as the sample full Procrustes mean shape defined by Eq. 13. Without loss of generality, we pre-scale the configurations X_i to the unit size, and replace the size constraint with S^2(\mu)=1.

Since Eq. 15 is a sum of positive reals, the optimum transformation parameters must also minimize each term in the sum; i.e. \hat\beta_i,\hat\Gamma_i,\hat\gamma_i minimize each \|\beta_iX_i\Gamma_i+1_k\gamma_i^\text{T}-\mu\|^2. Therefore, following Eq. 13, the sample full Procrustes mean shape is:

\hat\mu_F:=\arg\inf_{\mu}\frac{1}{n} \sum_{i=1}^{n}\min_{\beta_i,\Gamma_i,\gamma_i}\|(\beta_i X_i\Gamma_i+1_k\gamma_i^\text{T})-\mu\|^2 \newline =\arg\inf_{\mu}\frac{1}{n} \sum_{i=1}^{n}\|(\hat\beta_i X_i\hat\Gamma_i+1_k\hat\gamma_i^\text{T})-\mu\|^2 \newline=\arg\inf_{\mu}\sum_{i=1}^{n}\|X_i^{FP}-\mu\|^2

Note that dropping the constant factor \frac{1}{n} from the objective does not change the minimizer. Therefore, by Eq. 17 and 18, \hat\mu_F and the arithmetic mean \frac{1}{n}\sum_{i=1}^{n} X_i^{FP} have the same shape: a similarity transformation (here, merely a scaling, because of the unit-size constraint on \mu) perfectly superimposes one onto the other.

Estimating the transformation parameters of full GPA method in order to estimate the sample full Procrustes mean shape and registering all configurations to the estimated mean shape, is then equivalent to the following minimization problem:

\inf_{\beta_i,\Gamma_i,\gamma_i}\sum_{i=1}^{n} \|(\beta_i X_i\Gamma_i+1_k\gamma_i^\text{T}) – \frac{1}{n} \sum_{j=1}^{n} (\beta_j X_j\Gamma_j+1_k\gamma_j^\text{T})\|^2\\ \sum_{i=1}^{n}S^2(\beta_iX_i\Gamma_i+1_k\gamma_i^\text{T})=\sum_{i=1}^{n}S^2(X_i)

It can be proved that the above is equivalent to (see the proof here):

\inf_{\beta_i,\Gamma_i,\gamma_i}\frac{1}{n}\sum_{i=1}^{n-1} \sum_{j=i+1}^{n}\|(\beta_i X_i\Gamma_i+1_k\gamma_i^\text{T}) - (\beta_j X_j\Gamma_j+1_k\gamma_j^\text{T})\|^2\\ \newline \sum_{i=1}^{n}S^2(\beta_iX_i\Gamma_i+1_k\gamma_i^\text{T})=\sum_{i=1}^{n}S^2(X_i)

The above means that the full GPA minimizes the sum of pairwise squared Frobenius distances between all the transformed versions of the configurations (the full Procrustes fits) in the sample. It should be noted that the output of the full GPA is a set of configurations that are translated, rotated, and scaled versions of the original configurations; hence, the scale, location, and rotational information of each configuration is removed. The output configurations are registered onto each other such that the sum of pairwise squared Frobenius distances between all of them is minimized. If the scale of each output configuration is removed by re-scaling it to unit size, then the full Procrustes distance between each two shapes (of the output configurations), and also the estimated mean shape (once re-scaled to unit size), can be calculated.

Remark: the full GPA does not minimize the (squared) Frobenius distance between any two particular full Procrustes fits; rather, it minimizes the sum of the pairwise (squared) Frobenius distances over all fits in the set. It does minimize the (squared) Frobenius distance between each full Procrustes fit and the (full Procrustes) mean shape though.

Remark: the full GPA removes the scale of the configurations by scaling them in order to bring them closer to the mean shape. The scales are not the same for all the configurations, hence the full GPA does not preserve the (original) relative sizes of the configurations. The full Procrustes fits and the estimated mean shape may not have unit size; however, they are relatively scaled in a way that minimizes the sum of squared distances. This is the notion of removing the scale information (of the original configurations). Therefore, the Procrustes fits represent the shape (information) of the configurations, and the differences between each of them and the mean shape (between corresponding landmarks) are purely due to their shape variations.

In summary, the full GPA of configurations leads to a mean shape and full Procrustes fits such that each full Procrustes fit is superimposed onto the mean shape through the similarity transformations; hence, whatever geometrical variations/discrepancies remain between each full Procrustes fit and the mean shape are purely due to their shapes.

An algorithm for full GPA is as follows (a code sketch follows the list):

1- Translations/removing locations: remove the location of each configuration by centring it (translating each configuration so that its centroid is at the origin) and initially let each Procrustes fit be:

X_i^{FP}= CX_i\text{ for } i=1,…,n

2- estimate \hat\mu as \frac{1}{n}\sum_{i=1}^{n} X_i^{FP}.

3- Rotations and scales: perform a FOPA on each centred configuration X_i^{FP} to rotate and scale it to \hat\mu. Each newly transformed configuration, X_i^{*FP}, is then fitted to the estimated mean:

{X}_i^{*FP}=\hat\beta_i X_i^{FP} \hat\Gamma_i

\hat\beta_i and \hat\Gamma_i are the transformation parameters resulting from the FOPA. Note that the rotation is about the origin, and no translation is needed as the locations are already removed.

4- Update the mean shape: \hat\mu=\frac{1}{n}\sum_{i=1}^{n} {X}_i^{*FP}.

5- let X_i^{FP}={X}_i^{*FP}. Re-centring is not necessary because the centroid of each configuration at this stage is already located at the origin and the rotations are also about the origin. Re-scaling won’t change the location of the centroid because it is isotropic.

6- Repeat 3 to 5 until everyone’s happy, i.e. the change in the Procrustes sum of squares (\sum_{i=1}^{n} \|\beta_iX_i\Gamma_i+1_k\gamma_i^\text{T}-\mu\|^2) from one iteration to the next one becomes less than a particular tolerance.

Remark: The configurations can be pre-scaled to have the unit size but it is not necessary because their size is removed once they get scaled to better fit the estimated mean at each iteration.
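
Putting steps 1 to 6 together, here is a compact Scala sketch of the loop, reusing the CM method and the Scalismo calls from the code above (with the same imports assumed). The function name is made up; a fixed number of iterations replaces the tolerance test, the TriangleMesh3D(points, triangulation) constructor is assumed, and, like the fGpa method defined earlier, no explicit size constraint is imposed:

  def fullGpaSketch(meshes: Seq[TriangleMesh[_3D]], iterations: Int): (TriangleMesh[_3D], Seq[TriangleMesh[_3D]]) = {
    // point-wise arithmetic mean of meshes in correspondence (steps 2 and 4)
    def average(ms: Seq[TriangleMesh[_3D]]): TriangleMesh[_3D] = {
      val points = (0 until ms.head.pointSet.numberOfPoints).map { i =>
        (ms.map(_.pointSet.point(PointId(i)).toVector).reduce(_ + _) / ms.size).toPoint
      }
      TriangleMesh3D(points, ms.head.triangulation)
    }
    // step 1: remove location by centring every configuration
    var fits = meshes.map(m => m.transform(TranslationTransform[_3D](CM(m) * (-1))))
    var mean = average(fits) // step 2: initial estimate of the mean
    for (_ <- 0 until iterations) {
      // step 3: FOPA of each fit onto the current mean (rotation and scaling about the origin)
      fits = fits.map { m =>
        val t = LandmarkRegistration.similarity3DLandmarkRegistration(
          m.pointSet.points.toIndexedSeq.zip(mean.pointSet.points.toIndexedSeq),
          center = Point3D(0, 0, 0))
        m.transform(t)
      }
      mean = average(fits) // steps 4 and 5: update the mean and keep the new fits
    }
    (mean, fits)
  }

Swapping similarity3DLandmarkRegistration for rigid3DLandmarkRegistration (i.e. fixing the scale to 1) turns this sketch into the partial-GPA algorithm of the next section.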

Partial GPA

Partial GPA involves registration of a set of configurations merely by translation and rotation (no scaling); as opposed to the full GPA, it preserves the scale/size of the configurations. This method is appropriate for the analysis of forms, i.e. size-and-shapes. The partial GPA minimizes the following sum with respect to the transformation parameters (\Gamma_i, \gamma_i) and an unknown form \mu; a size constraint is not needed, as there is no scaling involved.

G_P(X_1,…X_n):=\sum_{i=1}^{n} \|X_i\Gamma_i+1_k\gamma_i^\text{T}-\mu\|^2\tag{19}

Once the above minimization is solved, the partial Procrustes fit of each configuration onto the common form is:

X_i^{PP}=X_i\hat\Gamma_i+1_k\hat\gamma_i^\text{T}\tag{20}

Similar to Eq. 17 and 18, we can write:

\hat\mu=\frac{1}{n}\sum_{i=1}^{n} X_i^{PP}\tag{21}

However, \hat\mu is not going to have the same shape as the mean shapes previously defined (the sample full/partial Procrustes mean shapes); it is a form whose Frobenius distance to each partial Procrustes fit is minimal. I, and some dinosaur, think the partial GPA is appropriate for group-registering a set of configurations onto each other; however, we do not know its application at this moment.

The algorithm for partial GPA is the same as that of the full GPA except that the scale is removed i.e. set as 1.

* The definitions follow the book “Statistical Shape Analysis” by Ian L. Dryden and Kanti V. Mardia, and the article “Procrustes Methods in the Statistical Analysis of Shape” by Colin Goodall. I may not have paraphrased them all; hence, some sentences may be subject to copyright and are not intended to be reused in official publications.

SSM Using Scalismo (1): Rigid alignment

In this post, I am going to use Scalismo to rigidly register/align/fit objects represented by triangular meshes onto a particular object. The alignment is based on the partial ordinary Procrustes analysis (POPA) method. POPA involves translation and rotation (not scaling) of an object with respect to another in order to fit the transforming object to the stationary one as closely as possible; read about POPA here. This initial alignment makes the non-rigid registration easier and faster later.

There are 3 femoral head geometries in the form of STL files; I select one as the reference object (reference mesh) and align/register the other 2 onto it.

Fig. 1. 3 femoral heads.

For each shape, I specified 3 anatomically-chosen landmarks (LMs) and had their coordinates (x,y,z) listed in separate CSV files. For example, one file contains:
51.5378,-65.1603,-33.8293
44.1069,-66.5612,-26.1846
56.3146,-67.5174,-30.7776

Remark: Scalismo recognizes/identifies a landmark by its ID (name). This means that landmarks pointing the same anatomical point/feature on different objects should have the same ID (although different coordinates).

I have already set up the Scala environment. A piece of code can be run line by line in the Scala console (Ctrl+Shift+X in IntelliJ); there, use Ctrl+Enter for I/O in the console environment. The whole object can be run by extending the App trait.

object InitialRR extends App {
  //To send the code selection to Scala Console, select the code in the Editor, press Ctrl + Shift + X
  //imports
  import scalismo.geometry._
  import scalismo.common._
  import scalismo.ui.api._
  import scalismo.mesh._
  import scalismo.registration.LandmarkRegistration
  import scalismo.io.{MeshIO, LandmarkIO}
  import scalismo.numerics.UniformMeshSampler3D
  import breeze.linalg.{DenseMatrix, DenseVector, csvread}
  import scalismo.registration.{Transformation, RotationTransform, TranslationTransform, RigidTransformation}
  import scala.io.StdIn.{readLine, readInt}

  scalismo.initialize()
  implicit val rng = scalismo.utils.Random(42)
  // ui
  val ui = ScalismoUI()
  //------------------------------------------------------------------------------------------------------------------//
  // This is a function to remove a group view by asking the order from the user
  def removeGroupView(groupView:scalismo.ui.api.Group) : Unit = {
    val s = readLine(s"remove ${groupView.name}  [y/n]? ")
    if (s == "y") groupView.remove()
  }

  // [Loading stl and LM files]
  val stlDir =  "I:/registration/SampleData/Meshes/"
  val LMDir =   "I:/registration/SampleData/LMs/"

  val meshFiles = new java.io.File(stlDir).listFiles()
  meshFiles.foreach(file => println(s"${file}\n"))

  val numOfMeshes = meshFiles.size
  println(s"number of files is ${numOfMeshes}")

  val originalData = ui.createGroup("Original Data")
  var i: Int = 1
  val (meshes, meshViews) = meshFiles.map(meshFile => {
    val mesh = MeshIO.readMesh(meshFile).get
    val meshView = ui.show(originalData, mesh, "mesh" + i)
    i = i + 1
    (mesh, meshView) // return a tuple of the mesh and the associated view
  }).unzip // take the tuples apart, to get a sequence of meshes and meshViews
  //meshViews(0).remove()
  removeGroupView(originalData)
   //------------------------------------------------------------------------------------------------------------------//
  // [removing centroid of each mesh i.e moving the centroid to (0,0,0), and hence the mesh by the same translation]

  val MeshPoints = meshes.map(mesh => (mesh.pointSet.points).toArray) //iterator to array

  def centroid(TriMeshPoints: Array[scalismo.geometry.Point[scalismo.geometry._3D]]): EuclideanVector[_3D] = {
    val vecrdPoints = TriMeshPoints.map { point => point.toVector }
    (vecrdPoints.reduce((v1, v2) => v1 + v2)) / TriMeshPoints.size //centroid
  }

  //finding centroids
  val MeshCentroids = MeshPoints.map(mesh => centroid(mesh))
  // translating meshes to the origin i.e making the centroid equal to the origin
  val translations = MeshCentroids.map(centVec => TranslationTransform[_3D](centVec.*(-1)))
  val translatedMeshes = (0 until numOfMeshes).map(i => meshes(i).transform(translations(i)))
  val zeroCentData = ui.createGroup("ZeroCentData")
  val translatedMeshViews = (0 until numOfMeshes).map(i => ui.show(zeroCentData, translatedMeshes(i), "mesh" + i))

  //selecting a RefMesh
  val refMesh = translatedMeshes(0)

  //------------------------------------------------------------------------------------------------------------------//
  //[generating landmark Objcs of each mesh]
  //reading landmark coordinates from files associated with data set files
  val LmFiles = new java.io.File(LMDir).listFiles
  LmFiles.foreach(file => println(s"${file}\n"))

  val LmCrdnts: Array[DenseMatrix[Double]] = LmFiles.map(file => csvread(file, separator=',', skipLines=0))

  // - creating point objects from the lm crdnts.A vector of vectors; each vector contains translated pointObjs of each set of Lms in a file

  val LmPointObjs = (0 until numOfMeshes).map(i =>{
    (0 until LmCrdnts(i).rows).map(row =>
      translations(i)(Point( LmCrdnts(i)(row,0), LmCrdnts(i)(row,1),LmCrdnts(i)(row,2))))  // translating the pointObjs by the translations that move the meshes to their centroids
  })

  val LMs = LmPointObjs.map(eachSet => {
    var n = 0
    eachSet.map(pointObj => {n = n+1; Landmark(s"lm-${n}", pointObj)}) //Ids' names are important for registration
  }) // a vector of vectors of landmarks. Each vector contains LmObjs of each set of Lms in a file
  val LMviews = LMs.map (eachSet => {
    eachSet.map(lm => ui.show(zeroCentData,lm,lm.id))
  })
  removeGroupView(zeroCentData)
  // -----------------------------------------------------------------------------------------------------------------//
  // [Rigid registration based on landmarks. Rigid (Procrustes) alignment of each mesh to the RefMesh]
  // RefLMs = LMs(0)
  val bestTransform  = (1 until numOfMeshes).map (i => LandmarkRegistration.rigid3DLandmarkRegistration(LMs(i) ,LMs(0), center = Point(0, 0, 0)))
  //rigid3DLandmarkRegistration pairs the corresponding landmarks based on their Ids (names)

  //------------------------------------------------------------------------------------------------------------------//
  // [Transforming the landmarks and the meshes onto the RefMesh based on the bestTransform]
  val alignedData = ui.createGroup("alignedData")
  //transforming LMs
  val rigidTransformedLms = (0 until  numOfMeshes-1).map (i => {
    LMs(i+1).map (lm => lm.transform(bestTransform(i)))  //(i+1) as the RefLms stay where they are
  })

  val rigidTransformedLmsviews = rigidTransformedLms.map (eachSet => {
    eachSet.map(lm => ui.show(alignedData,lm,lm.id))
  })
  val RefLmsView = LMs(0).map(lm => ui.show(alignedData,lm,lm.id))

  //transforming Meshes
  val alignedMeshes = (0 until numOfMeshes-1).map(i => translatedMeshes(i+1).transform(bestTransform(i)))
  // showing the meshes
  val alignedMeshesViews = (0 until numOfMeshes-1).map(i => ui.show(alignedData, alignedMeshes(i), "Amesh" + i))
  val RefMeshesView = ui.show(alignedData, refMesh, "RefMesh")

  //------------------------------------------------------------------------------------------------------------------//
  //Saving aligned mesh and LMs to new files
  val outputMeshDir = "I:/registration/AlignedData/Meshes/"
  val outputLmDir = "I:/registration/AlignedData/LMs/"

  MeshIO.writeMesh(refMesh, new java.io.File(outputMeshDir + "mesh0.stl"))
  (0 until numOfMeshes-1).foreach(i => MeshIO.writeMesh(alignedMeshes(i), new java.io.File(outputMeshDir + s"mesh${i+1}.stl")))


  LandmarkIO.writeLandmarksJson[_3D](LMs(0), new java.io.File(outputLmDir + "LM0.json"))
  (0 until numOfMeshes-1).foreach(i => LandmarkIO.writeLandmarksJson[_3D](rigidTransformedLms(i), new java.io.File(outputLmDir + s"LM${i+1}.json")))
  // asking whether to close the ui
  val finito = readLine("finito [y/n]? ")
  if (finito == "y") ui.close()
}

Here is what the code does: POPA has rigidly registered two femoral heads onto the one selected as the reference.