volume¶
Submodule extending the vtkVolume object functionality.
Volume¶

class vedo.volume.Volume(inputobj=None, c='RdBu_r', alpha=(0.0, 0.0, 0.2, 0.4, 0.8, 1.0), alphaGradient=None, alphaUnit=1, mode=0, shade=False, spacing=None, dims=None, origin=None, mapper='smart')[source]¶
Bases: vtkRenderingCorePython.vtkVolume, vedo.base.BaseGrid
Derived class of vtkVolume. Can be initialized with a numpy object, a vtkImageData, or a list of 2D bmp files.
See e.g.: numpy2volume1.py
 Parameters
c (list,str) – sets colors along the scalar range, or a matplotlib color map name
alpha (float,list) – sets transparencies along the scalar range
alphaUnit (float) – low values make composite rendering look brighter and denser
origin (list) – set volume origin coordinates
spacing (list) – voxel dimensions in x, y and z.
dims (list) – specify the dimensions of the volume.
mapper (str) – either ‘gpu’, ‘opengl_gpu’, ‘fixed’ or ‘smart’
mode (int) –
define the volumetric rendering style:
0, composite rendering
1, maximum projection rendering
2, minimum projection
3, average projection
4, additive mode
The default mode is "composite", where the scalar values are sampled through the volume and composited in a front-to-back scheme through alpha blending. The final color and opacity are determined using the color and opacity transfer functions specified in the alpha keyword.
Maximum and minimum intensity blend modes use the maximum and minimum scalar values, respectively, along the sampling ray. The final color and opacity are determined by passing the resultant value through the color and opacity transfer functions.
Additive blend mode accumulates scalar values by passing each value through the opacity transfer function and then adding up the product of the value and its opacity. In other words, the scalar values are scaled using the opacity transfer function and summed to derive the final color. Note that the resulting image is always grayscale, i.e. the aggregated values are not passed through the color transfer function. This is because the final value is a derived value and not a real data value along the sampling ray.
Average intensity blend mode works similarly to the additive blend mode: the scalar values are multiplied by the opacity calculated from the opacity transfer function and then added. The additional step here is to divide the sum by the number of samples taken through the volume. As with the additive projection, the final image will always be grayscale, i.e. the aggregated values are not passed through the color transfer function.
Hint
if a list of values is used for alphas this is interpreted as a transfer function along the range of the scalar.
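The hint above can be sketched in plain numpy, assuming evenly spaced control points over the scalar range with linear interpolation between them (vedo's exact placement of the control points may differ):

```python
import numpy as np

def alpha_transfer(scalars, alphas, smin=None, smax=None):
    """Opacity lookup: spread the alpha list as evenly spaced control
    points over the scalar range and linearly interpolate between them."""
    scalars = np.asarray(scalars, dtype=float)
    lo = scalars.min() if smin is None else smin
    hi = scalars.max() if smax is None else smax
    nodes = np.linspace(lo, hi, len(alphas))
    return np.interp(scalars, nodes, alphas)

# with the class default alpha=(0.0, 0.0, 0.2, 0.4, 0.8, 1.0), low
# scalar values are fully transparent and high values fully opaque
op = alpha_transfer([0.0, 0.5, 1.0], (0.0, 0.0, 0.2, 0.4, 0.8, 1.0), 0.0, 1.0)
```

A single float alpha, by contrast, assigns the same opacity to the whole scalar range.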

alphaGradient
(alphaGrad)[source]¶ Assign a set of transparencies to a volume's gradient along the range of the scalar value. A single constant value can also be assigned. The gradient function is used to decrease the opacity in the "flat" regions of the volume while maintaining it at the boundaries between material types. The gradient is measured as the amount by which the intensity changes over unit distance.
The format for alphaGrad is the same as for the method volume.alpha().

append
(volumes, axis='z', preserveExtents=False)[source]¶ Take the components from multiple inputs and merge them into one output. Except for the append axis, all inputs must have the same extent. All inputs must have the same number of scalar components. The output has the same origin and spacing as the first input. The origin and spacing of all other inputs are ignored. All inputs must have the same scalar type.
 Parameters
preserveExtents (bool) – if True, the extent of the inputs is used to place the image in the output. The whole extent of the output is the union of the input whole extents. Any portion of the output not covered by the inputs is set to zero. The origin and spacing is taken from the first input.
from vedo import load, datadir
vol = load(datadir+'embryo.tif')
vol.append(vol, axis='x').show()

center
(center=None)[source]¶ Set/get the volume coordinates of its center. Position is reset to (0,0,0).

correlationWith
(vol2, dim=2)[source]¶ Find the correlation between two volumetric data sets. Keyword dim determines whether the correlation will be 3D, 2D or 1D. The default is a 2D correlation. The output size will match the size of the first input. The second input is considered the correlation kernel.

crop
(left=None, right=None, back=None, front=None, bottom=None, top=None, VOI=())[source]¶ Crop a Volume object.
 Parameters
left (float) – fraction to crop from the left plane (negative x)
right (float) – fraction to crop from the right plane (positive x)
back (float) – fraction to crop from the back plane (negative y)
front (float) – fraction to crop from the front plane (positive y)
bottom (float) – fraction to crop from the bottom plane (negative z)
top (float) – fraction to crop from the top plane (positive z)
VOI (list) – extract a Volume Of Interest expressed in voxel numbers, e.g.: vol.crop(VOI=(xmin, xmax, ymin, ymax, zmin, zmax)) # all integer numbers
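The fraction arguments translate into voxel index ranges; a plain-numpy sketch of the idea (axis ordering assumed to be x, y, z here, for illustration only):

```python
import numpy as np

def crop_fractions(arr, left=0, right=0, back=0, front=0, bottom=0, top=0):
    """Crop a 3D array by fractions of its extent along each axis."""
    nx, ny, nz = arr.shape
    x0, x1 = int(round(nx * left)),   nx - int(round(nx * right))
    y0, y1 = int(round(ny * back)),   ny - int(round(ny * front))
    z0, z1 = int(round(nz * bottom)), nz - int(round(nz * top))
    return arr[x0:x1, y0:y1, z0:z1]

a = np.arange(1000).reshape(10, 10, 10)
b = crop_fractions(a, left=0.2, top=0.3)  # drop 20% from -x, 30% from +z
```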

dilate
(neighbours=(2, 2, 2))[source]¶ Replace a voxel with the maximum over an ellipsoidal neighborhood of voxels. If neighbours is 1 along an axis, no processing is done on that axis.
See example script: erode_dilate.py

erode
(neighbours=(2, 2, 2))[source]¶ Replace a voxel with the minimum over an ellipsoidal neighborhood of voxels. If neighbours is 1 along an axis, no processing is done on that axis.
See example script: erode_dilate.py
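Dilation (and, with min in place of max, erosion) can be sketched in plain numpy over a rectangular box neighborhood; note that vedo/VTK use an ellipsoidal neighborhood, so this is only an illustration of the idea:

```python
import numpy as np

def grey_dilate(vol, neighbours=(2, 2, 2)):
    """Replace each voxel with the maximum over a box neighbourhood of
    half-width n//2 per axis; an axis with neighbours == 1 is untouched.
    (Use .min() instead of .max() for erosion.)"""
    vol = np.asarray(vol)
    nx, ny, nz = (n // 2 for n in neighbours)
    out = np.empty_like(vol)
    X, Y, Z = vol.shape
    for i in range(X):
        for j in range(Y):
            for k in range(Z):
                out[i, j, k] = vol[max(i - nx, 0):i + nx + 1,
                                   max(j - ny, 0):j + ny + 1,
                                   max(k - nz, 0):k + nz + 1].max()
    return out

vol = np.zeros((5, 5, 5))
vol[2, 2, 2] = 1.0
grown = grey_dilate(vol, neighbours=(2, 2, 2))  # the single voxel spreads
```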

euclideanDistance
(anisotropy=False, maxDistance=None)[source]¶ Implementation of the Euclidean DT (Distance Transform) using Saito's algorithm. The distance map produced contains the square of the Euclidean distance values. The algorithm has O(n^(D+1)) complexity over n×n×…×n images in D dimensions.
Check out also: https://en.wikipedia.org/wiki/Distance_transform
 Parameters
See example script: euclDist.py
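A brute-force numpy sketch shows what the squared-distance map contains (Saito's algorithm computes the same result far more efficiently):

```python
import numpy as np

def squared_edt(mask):
    """Brute-force squared Euclidean distance transform of a 2D boolean
    mask: for each background pixel, the squared distance to the nearest
    foreground pixel. O(n^4) -- for illustration only."""
    fg = np.argwhere(mask)
    out = np.zeros(mask.shape)
    for p in np.argwhere(~mask):
        out[tuple(p)] = ((fg - p) ** 2).sum(axis=1).min()
    return out

m = np.zeros((5, 5), dtype=bool)
m[2, 2] = True
d2 = squared_edt(m)  # squared distances, as noted in the docstring above
```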

frequencyPassFilter
(lowcutoff=None, highcutoff=None, order=1)[source]¶ Low-pass and high-pass filtering become trivial in the frequency domain: a portion of the pixels/voxels is simply masked or attenuated. This function applies a high-pass Butterworth filter that gradually attenuates the frequency-domain image.
The gradual attenuation of the filter is important. A simple high-pass filter would simply mask a set of pixels in the frequency domain, but the abrupt transition would cause a ringing effect in the spatial domain.
 Parameters
Check out also this example:
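Separately from the example referenced above, here is a plain-numpy sketch of Butterworth attenuation in the FFT domain, using the textbook low-pass form H(f) = 1 / (1 + (f/fc)^(2n)); the exact attenuation function VTK uses may differ:

```python
import numpy as np

def butterworth_lowpass(img, cutoff, order=1):
    """Attenuate high spatial frequencies of a 2D image with a
    Butterworth window applied in the FFT domain. The smooth roll-off
    avoids the ringing that a hard frequency mask would cause."""
    F = np.fft.fftshift(np.fft.fft2(img))
    ny, nx = img.shape
    y, x = np.ogrid[-(ny // 2):ny - ny // 2, -(nx // 2):nx - nx // 2]
    f = np.hypot(x, y)                      # radial frequency
    H = 1.0 / (1.0 + (f / cutoff) ** (2 * order))
    return np.fft.ifft2(np.fft.ifftshift(F * H)).real

img = np.random.default_rng(0).random((64, 64))
smooth = butterworth_lowpass(img, cutoff=4.0, order=2)
```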

gaussianSmooth
(sigma=(2, 2, 2), radius=None)[source]¶ Performs a convolution of the input Volume with a gaussian.

getDataArray
()[source]¶ Get read/write access to the voxels of a Volume object as a numpy array.
When you set values in the output image, you don't want numpy to reallocate the array but instead set values in the existing array, so use the [:] operator. Example: arr[:] = arr*2 + 15
If the array is modified, call volume.imagedata().GetPointData().GetScalars().Modified() when all your modifications are completed.
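The difference between arr[:] = ... and arr = ... can be seen with a plain numpy array standing in for the buffer that getDataArray() exposes:

```python
import numpy as np

arr = np.zeros(4)
buffer_view = arr        # stands in for the volume's underlying voxel buffer

arr[:] = arr * 2 + 15    # in-place: writes through to the shared buffer
# buffer_view is now [15, 15, 15, 15]

arr = arr * 2 + 15       # rebinds 'arr' to a brand-new array instead;
                         # the original buffer is left untouched at 15
```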

jittering
(status=None)[source]¶ If jittering is True, each ray traversal direction will be perturbed slightly using a noise texture to get rid of wood-grain effects.

medianSmooth
(neighbours=(2, 2, 2))[source]¶ Median filter that replaces each pixel with the median value from a rectangular neighborhood around that pixel.

mirror
(axis='x')[source]¶ Mirror flip along one of the cartesian axes.
Note
axis='n' will flip only mesh normals.

mode
(mode=None)[source]¶ Define the volumetric rendering style.
0, composite rendering
1, maximum projection rendering
2, minimum projection rendering
3, average projection rendering
4, additive mode
The default mode is "composite", where the scalar values are sampled through the volume and composited in a front-to-back scheme through alpha blending. The final color and opacity are determined using the color and opacity transfer functions specified in the alpha keyword.
Maximum and minimum intensity blend modes use the maximum and minimum scalar values, respectively, along the sampling ray. The final color and opacity are determined by passing the resultant value through the color and opacity transfer functions.
Additive blend mode accumulates scalar values by passing each value through the opacity transfer function and then adding up the product of the value and its opacity. In other words, the scalar values are scaled using the opacity transfer function and summed to derive the final color. Note that the resulting image is always grayscale, i.e. the aggregated values are not passed through the color transfer function. This is because the final value is a derived value and not a real data value along the sampling ray.
Average intensity blend mode works similarly to the additive blend mode: the scalar values are multiplied by the opacity calculated from the opacity transfer function and then added. The additional step here is to divide the sum by the number of samples taken through the volume. As with the additive projection, the final image will always be grayscale, i.e. the aggregated values are not passed through the color transfer function.
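The five blend modes can be sketched for a single sampling ray in plain numpy (colors omitted; the scalar intensity stands in for the color lookup, so this is an illustration of the combining rules only, not VTK's implementation):

```python
import numpy as np

def blend_ray(scalars, opacities, mode=0):
    """Combine the samples taken along one ray, one rule per mode."""
    s = np.asarray(scalars, dtype=float)
    a = np.asarray(opacities, dtype=float)
    if mode == 0:                      # composite: front-to-back alpha blending
        out, transmitted = 0.0, 1.0
        for si, ai in zip(s, a):
            out += transmitted * ai * si
            transmitted *= 1.0 - ai    # light remaining after this sample
        return out
    if mode == 1:                      # maximum intensity projection
        return s.max()
    if mode == 2:                      # minimum intensity projection
        return s.min()
    if mode == 3:                      # average: opacity-weighted sum / nr. samples
        return (s * a).sum() / len(s)
    if mode == 4:                      # additive: opacity-weighted sum
        return (s * a).sum()

samples   = [0.2, 0.9, 0.5]            # scalar values met front-to-back
opacities = [0.5, 0.5, 0.5]            # from the opacity transfer function
```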

operation
(operation, volume2=None)[source]¶ Perform operations with Volume objects. volume2 can be a constant value.
Possible operations are: +, -, *, /, 1/x, sin, cos, exp, log, abs, **2, sqrt, min, max, atan, atan2, median, mag, dot, gradient, divergence, laplacian.
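Several of these operations correspond to plain voxelwise arithmetic; the numpy sketch below mirrors what they compute on the underlying scalar arrays (illustrative only, not vedo's implementation):

```python
import numpy as np

# voxel scalar arrays standing in for two Volume objects
v1 = np.array([1.0, 4.0, 9.0])
v2 = np.array([2.0, 2.0, 2.0])

added   = v1 + v2             # like operation('+', volume2)
recip   = 1.0 / v1            # like operation('1/x')
rooted  = np.sqrt(v1)         # like operation('sqrt')
clipped = np.minimum(v1, v2)  # like operation('min', volume2)
```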

permuteAxes
(x, y, z)[source]¶ Reorder the axes of the Volume by specifying the input axes which are supposed to become the new X, Y, and Z.

resample
(newSpacing, interpolation=1)[source]¶ Resamples a Volume to be larger or smaller. This method modifies the spacing of the input. Linear interpolation is used to resample the data.

shade
(status=None)[source]¶ Set/Get the shading of a Volume. Shading can be further controlled with the volume.lighting() method. If shading is turned on, the mapper may perform shading calculations. In some cases shading does not apply (for example, in maximum intensity projection mode).

slicePlane
(origin=(0, 0, 0), normal=(1, 1, 1))[source]¶ Extract the slice along a given plane position and normal.

threshold
(above=None, below=None, replace=None, replaceOut=None)[source]¶ Binary or continuous volume thresholding. Find the voxels that contain a value above/below the input values and replace them with a new value (default is 0).
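The thresholding rule can be sketched in numpy (vedo performs the equivalent on the voxel data in place):

```python
import numpy as np

def threshold(vol, above=None, below=None, replace=0.0):
    """Replace voxels whose value lies above/below the given limits."""
    out = np.array(vol, dtype=float)   # work on a copy
    if above is not None:
        out[out > above] = replace
    if below is not None:
        out[out < below] = replace
    return out

v = np.array([1.0, 5.0, 10.0])
t = threshold(v, above=6.0, below=2.0)  # keeps only the mid-range voxel
```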

toPoints
()[source]¶ Extract all image voxels as points. This function takes an input Volume and creates a Mesh that contains the points and the point attributes.
See example script: vol2points.py
interpolateToVolume¶

vedo.volume.interpolateToVolume(mesh, kernel='shepard', radius=None, N=None, bounds=None, nullValue=None, dims=(25, 25, 25))[source]¶ Generate a Volume by interpolating a scalar or vector field which is only known on a scattered set of points or mesh. Available interpolation kernels are: shepard, gaussian, or linear.
 Parameters
mesh2Volume¶
signedDistanceFromPointCloud¶
volumeFromMesh¶

vedo.volume.volumeFromMesh(mesh, bounds=None, dims=(20, 20, 20), signed=True, negate=False)[source]¶ Compute signed distances over a volume from an input mesh. The output is a Volume object whose voxels contain the signed distance from the mesh.
 Parameters
See example script: volumeFromMesh.py