diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/globalmacromodules.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/globalmacromodules.md index b9a93eddb..2a276a890 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/globalmacromodules.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/globalmacromodules.md @@ -79,7 +79,11 @@ Make sure to choose *Directory Structure* as *self-contained*. This ensures that Click Next > to edit further properties. You have the opportunity to directly define the internal network of the macro module, for example, by copying an existing network. In this case, we could copy the network of the local macro module `Filter` we already created. In addition, you have the opportunity to directly create a Python file. Python scripting can be used for the implementation of module interactions and other module functionalities. More information about Python scripting can be found [here](./tutorials/basicmechanisms/macromodules/pythonscripting). -{{< imagegallery 2 "images" "ProjectWizard1" "ProjectWizard2" >}} +{{< imagegallery 2 + "images" + "ProjectWizard1|Module properties" + "ProjectWizard2|Macro module properties" +>}} ## Structure of Global Macro Modules After creating your global macro module, you can find the created project *MyProject* in your package. This project contains your macro module `Filter`. For the macro module exist three files: diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/itemmodelview.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/itemmodelview.md index 2fffaa6f0..fcdc071a3 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/itemmodelview.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/itemmodelview.md @@ -35,7 +35,12 @@ We can leave the *Fields* empty for now. We can add them in the *.script* file. 
Click *Create* {{< mousebutton "left" >}}. -{{< imagegallery 3 "images/tutorials/basicmechanics" "ItemModel_1" "ItemModel_2" "ItemModel_3">}} +{{< imagegallery 3 + "images/tutorials/basicmechanics" + "ItemModel_1|Module properties" + "ItemModel_2|Macro module properties" + "ItemModel_3|Module field interface" +>}} If you cannot find your module via *Module Search*, reload module cache by clicking the menu item {{< menuitem "Extras" "Reload Module Database (Clear Cache)" >}} diff --git a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythondebugger.md b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythondebugger.md index 7adc16764..18257bdb0 100644 --- a/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythondebugger.md +++ b/mevislab.github.io/content/tutorials/basicmechanisms/macromodules/pythondebugger.md @@ -121,7 +121,11 @@ Now, the code execution is only stopped if you copy the tag name *SOPClassUID*. ## Evaluate Expression The *Evaluate Expression* tab allows you to modify variables during execution. In our example, you can set the result item.text(1) to something like item.setText(1, "Hello"). If you now step to the next line via {{< keyboard "F10" >}}, your watched value shows *"Hello"* instead of *"SOPClassUID"*. -{{< imagegallery 2 "images/tutorials/basicmechanics" "Debug9" "Debug9a" >}} +{{< imagegallery 2 + "images/tutorials/basicmechanics" + "Debug9|Evaluate expression" + "Debug9a|Watches" +>}} ## Summary * MATE allows debugging of any Python files including files predefined in MeVisLab. 
diff --git a/mevislab.github.io/content/tutorials/dataobjects/contourobjects.md b/mevislab.github.io/content/tutorials/dataobjects/contourobjects.md index 9b5e9c1c4..d290870dc 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contourobjects.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contourobjects.md @@ -37,7 +37,20 @@ As mentioned, when creating CSOs, you can do this interactively by using an edit The following images show editors available in MeVisLab for drawing CSOs: -{{< imagegallery 6 "images/tutorials/dataobjects/contours" "SoCSOPointEditor" "SoCSOAngleEditor" "SoCSOArrowEditor" "SoCSODistanceLineEditor" "SoCSODistancePolylineEditor" "SoCSOEllipseEditor" "SoCSORectangleEditor" "SoCSOSplineEditor" "SoCSOPolygonEditor" "SoCSOIsoEditor" "SoCSOLiveWireEditor">}} +{{< imagegallery 6 + "images/tutorials/dataobjects/contours" + "SoCSOPointEditor" + "SoCSOAngleEditor" + "SoCSOArrowEditor" + "SoCSODistanceLineEditor" + "SoCSODistancePolylineEditor" + "SoCSOEllipseEditor" + "SoCSORectangleEditor" + "SoCSOSplineEditor" + "SoCSOPolygonEditor" + "SoCSOIsoEditor" + "SoCSOLiveWireEditor" +>}} {{}} The `SoCSOIsoEditor` and `SoCSOLiveWireEditor` are special, because they are using an algorithm to detect edges themselves. diff --git a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample6.md b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample6.md index 3cded32ca..a51ca55d7 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample6.md +++ b/mevislab.github.io/content/tutorials/dataobjects/contours/contourexample6.md @@ -70,7 +70,11 @@ In order to see all possible parameters of a CSO, add a `CSOInfo` module to your For labels shown on grayscale images, it makes sense to add a shadow. Open the panel of the `SoCSOVisualizationSettings` module and on tab *Misc* check the option Should render shadow. This increases the readability of your labels. 
-{{< imagegallery 2 "images/tutorials/dataobjects/contours/" "Ex6_NoShadow" "Ex6_Shadow" >}} +{{< imagegallery 2 + "images/tutorials/dataobjects/contours/" + "Ex6_NoShadow|Labels without shadow" + "Ex6_Shadow|Labels with shadow" +>}} If you want to define your static text as a parameter in multiple labels, you can open the panel of the `CSOLabelRenderer` module and define text as *User Data*. The values can then be used in Python via userData. diff --git a/mevislab.github.io/content/tutorials/dataobjects/curves/curvesexample1.md b/mevislab.github.io/content/tutorials/dataobjects/curves/curvesexample1.md index b803772a8..a9f199827 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/curves/curvesexample1.md +++ b/mevislab.github.io/content/tutorials/dataobjects/curves/curvesexample1.md @@ -77,7 +77,11 @@ Now, update the Curve Table, so that you are using three columns You can see two curves. The second and third columns are printed as separate curves. Both appear yellow. After checking Split columns into data sets, you will see one yellow and one red curve. -{{}} +{{< imagegallery 2 + "/images/tutorials/dataobjects/curves" + "before_split|Without splitting columns" + "after_split|Splitting the columns" +>}} If the flag Split columns into data sets is set to *TRUE*, then a table with more than two columns is split into different *CurveData* objects. This gives the user the possibility to assign a different style and title for each series. diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaceobjects.md b/mevislab.github.io/content/tutorials/dataobjects/surfaceobjects.md index 3f9b28315..9254ded86 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaceobjects.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaceobjects.md @@ -49,12 +49,22 @@ Between the nodes and alongside the edges, faces are created. 
The rendering of t #### Normals Normals display the orthogonal vector either to the faces (face normals) or to the nodes (nodes normals, which are just the average of adjacent face normals). With the help of the module `SoWEMRendererNormals`, these structures can be visualized. ![Network for rendering normals and nodes of a WEM](images/tutorials/dataobjects/surfaces/WEM_01_6.png "Network for rendering normals and nodes of a WEM") -{{< imagegallery 2 "images/tutorials/dataobjects/surfaces/" "WEMNodeNormals" "WEMFaceNormals">}} + +{{< imagegallery 2 + "images/tutorials/dataobjects/surfaces/" + "WEMNodeNormals" + "WEMFaceNormals" +>}} ### WEMs in MeVisLab {#WEMsInMevislab} In MeVisLab, WEMs can consist of triangles, quadrilaterals, or other polygons. Most common in MeVisLab are surfaces composed of triangles, as shown in the following example. With the help of the module `WEMLoad`, existing WEMs can be loaded into the network. -{{< imagegallery 3 "images/tutorials/dataobjects/surfaces/" "WEMTriangles" "WEMNetwork" "WEMSurface" >}} +{{< imagegallery 3 + "images/tutorials/dataobjects/surfaces/" + "WEMTriangles" + "WEMNetwork" + "WEMSurface" +>}} ## Summary * WEMs are polygon meshes, in most cases composed of triangles. diff --git a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample5.md b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample5.md index dfa22483f..f680ac2e9 100644 --- a/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample5.md +++ b/mevislab.github.io/content/tutorials/dataobjects/surfaces/surfaceexample5.md @@ -36,7 +36,11 @@ As a next step, add and connect two modules `WEMSubdivide` to further divide edg The difference when selecting different maximum edge lengths can be seen in the following images. 
-{{< imagegallery 2 "images/tutorials/dataobjects/surfaces" "EdgeLength1" "EdgeLength01">}} +{{< imagegallery 2 + "images/tutorials/dataobjects/surfaces" + "EdgeLength1|Short edge length" + "EdgeLength01|Even shorter edge length" +>}} #### Distances Between WEMs are Stored in PVLs Now, add the modules `WEMSurfaceDistance` and `WEMInfo` to your workspace and connect them as shown. `WEMSurfaceDistance` calculates the minimum distance between the nodes of both WEM. The distances are stored in the nodes' PVLs as LUT values. diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing2.md b/mevislab.github.io/content/tutorials/image_processing/image_processing2.md index 0e176d1f2..c5404bf8f 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing2.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing2.md @@ -47,7 +47,11 @@ Add a `Mask` and a `Threshold` module to your workspace and connect them as seen Changing the window/level values in your viewer still also changes background voxels. The `Threshold` module still leaves the voxels as is because the threshold value is configured as larger than *0*. Open the panels of the modules `Threshold` and `Mask` via double-click {{< mousebutton "left" >}} and set the values as seen below. -{{< imagegallery 2 "images/tutorials/image_processing" "Threshold" "Mask">}} +{{< imagegallery 2 + "images/tutorials/image_processing" + "Threshold|Threshold panel" + "Mask|Mask panel" +>}} Now, all voxels having a value lower or equal *60* are set to *0*, all others are set to *1*. The resulting image from the `Threshold` module is a binary image that can now be used as a mask by the `Mask` module. 
diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing3.md b/mevislab.github.io/content/tutorials/image_processing/image_processing3.md index 7f686b9bb..a017daa28 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing3.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing3.md @@ -63,7 +63,11 @@ Scrolling through the slices, you will see that your segmentation is not closed. The difference before and after closing the gaps can be seen in the Output Inspector. -{{< imagegallery 2 "images/tutorials/image_processing" "Output_Before" "Output_After">}} +{{< imagegallery 2 + "images/tutorials/image_processing" + "Output_Before|Original result of the region growing algorithm" + "Output_After|Gaps are closed" +>}} You can play around with the different settings of the `RegionGrowing` and `CloseGap` modules to get a better result. diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing4.md b/mevislab.github.io/content/tutorials/image_processing/image_processing4.md index c289dc225..31d5eaa93 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing4.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing4.md @@ -58,7 +58,11 @@ What happens in your network now? 4) Both `SoWEMRenderer` (the head on the left side and the subtraction on the right side) are inputs for a `SoSwitch`. 5) The `SoSwitch` toggles through its inputs and you can show the original WEM of the head or the subtraction. -{{< imagegallery 2 "images/tutorials/image_processing" "SoExaminerViewer_1" "SoExaminerViewer_2" >}} +{{< imagegallery 2 + "images/tutorials/image_processing" + "SoExaminerViewer_1|Original surface of a head" + "SoExaminerViewer_2|Sphere subtracted from the surface" +>}} You can now toggle the hole to be shown or not, depending on your setting for the `SoSwitch`. 
diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing5.md b/mevislab.github.io/content/tutorials/image_processing/image_processing5.md index 4cb2f5049..e53d72dcf 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing5.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing5.md @@ -38,7 +38,12 @@ The `Switch` module takes multiple input images and you can toggle between them The `SoRenderArea` now shows the 2D images in a view defined by the `Switch`. -{{< imagegallery 3 "images/tutorials/image_processing" "View0" "View1" "View2" >}} +{{< imagegallery 3 + "images/tutorials/image_processing" + "View0|Sagittal view" + "View1|Coronal view" + "View2|Transversal (axial) view" +>}} ### Current 2D Slice in 3D We now want to visualize the slice visible in the 2D images as a 3D plane. Add a `SoGVRDrawOnPlane` and a `SoExaminerViewer` to your workspace and connect them. We should also add a `SoBackground` and a `SoLUTEditor`. The viewer remains empty because no source image is selected to display. Add a `SoGVRVolumeRenderer` and connect it to your viewer and the `LocalImage`. diff --git a/mevislab.github.io/content/tutorials/image_processing/image_processing6.md b/mevislab.github.io/content/tutorials/image_processing/image_processing6.md index 88d286f30..01b8ad770 100644 --- a/mevislab.github.io/content/tutorials/image_processing/image_processing6.md +++ b/mevislab.github.io/content/tutorials/image_processing/image_processing6.md @@ -60,7 +60,13 @@ In order to see the images, add a `View2D` module and connect it to the `DicomIm The *RTPLAN* and *RTSTRUCT* files do not contain pixel data. Therefore, the `DicomImport` module informs that there is no image data available. The *CT* series contains the original CT data and the *RTDOSE* series contains a mask providing three-dimensional dose data. 
-{{< imagegallery 4 "images/tutorials/image_processing/" "RTPLAN" "RTSTRUCT" "CT512" "RTDOSE">}} +{{< imagegallery 4 + "images/tutorials/image_processing/" + "RTPLAN" + "RTSTRUCT" + "CT512" + "RTDOSE" +>}} Select the *CT 512×512×272×1* series. diff --git a/mevislab.github.io/content/tutorials/openinventor/camerainteraction.md b/mevislab.github.io/content/tutorials/openinventor/camerainteraction.md index 4f7567706..fc44aa472 100644 --- a/mevislab.github.io/content/tutorials/openinventor/camerainteraction.md +++ b/mevislab.github.io/content/tutorials/openinventor/camerainteraction.md @@ -35,7 +35,11 @@ Add a `SoCameraInteraction` module and connect it between the `SoGroup` and the The `SoCameraInteraction` does not only allow you to change the camera position in your scene but also adds light. The module automatically adds a headlight that you can switch off with a field of the module. -{{< imagegallery 2 "images/tutorials/openinventor" "Headlight_TRUE" "Headlight_FALSE" >}} +{{< imagegallery 2 + "images/tutorials/openinventor" + "Headlight_TRUE|With headlight" + "Headlight_FALSE|Without headlight" +>}} The `SoCameraInteraction` can also be extended by a `SoPerspectiveCamera` or a `SoOrthographicCamera`. Add a `SoSwitch` to your `SoGroup` and connect a `SoPerspectiveCamera` and a `SoOrthographicCamera`. @@ -45,7 +49,11 @@ You can now switch between both cameras, but you cannot interact with them in th Whenever you change the camera in the switch, you need to detect the new camera in the `SoCameraInteraction`. -{{< imagegallery 2 "images/tutorials/openinventor" "SoPerspectiveCamera" "SoOrthographicCamera" >}} +{{< imagegallery 2 + "images/tutorials/openinventor" + "SoPerspectiveCamera" + "SoOrthographicCamera" +>}} A `SoPerspectiveCamera` camera defines a perspective projection from a viewpoint. @@ -76,7 +84,11 @@ The difference to the `SoRenderArea` can be seen immediately. 
You can interact w The module also allows you to switch between *perspective* and *orthographic* camera by changing the field cameraType. -{{< imagegallery 2 "images/tutorials/openinventor" "SoExaminerViewer_Perspective" "SoExaminerViewer_Orthographic" >}} +{{< imagegallery 2 + "images/tutorials/openinventor" + "SoExaminerViewer_Perspective|Using a perspective camera" + "SoExaminerViewer_Orthographic|Using an orthographic camera" +>}} The module also provides UI elements to interact. diff --git a/mevislab.github.io/content/tutorials/summary/summary1.md b/mevislab.github.io/content/tutorials/summary/summary1.md index 406420a2f..3078b4b8a 100644 --- a/mevislab.github.io/content/tutorials/summary/summary1.md +++ b/mevislab.github.io/content/tutorials/summary/summary1.md @@ -105,7 +105,11 @@ Add a `SoSwitch` module to your network. Connect the switch to both of your `SoW The default input of the switch is *None*. Your 3D viewer remains black. Using the arrows on the `SoSwitch` allows you to toggle between the segmentation and the image. Input *0* shows the segmented brain, input *1* shows the head. You are now able to toggle between them. A view with both objects is still missing. -{{< imagegallery 2 "images/tutorials/summary" "Example1_Segmentation" "Example1_Image" >}} +{{< imagegallery 2 + "images/tutorials/summary" + "Example1_Segmentation|Segmentation of the brain" + "Example1_Image|Segmentation of the skin" +>}} Add a `SoGroup` module and connect both `SoWEMRenderer` modules as input. The output needs to be connected to the right input of the `SoSwitch` module. diff --git a/mevislab.github.io/content/tutorials/summary/summary5.md b/mevislab.github.io/content/tutorials/summary/summary5.md index da797f2ea..43225bbb7 100644 --- a/mevislab.github.io/content/tutorials/summary/summary5.md +++ b/mevislab.github.io/content/tutorials/summary/summary5.md @@ -119,7 +119,11 @@ The last step is to select the target directory for your application. 
After the installer finished the setup, you will find a desktop icon and a start menu entry for your application. -{{< imagegallery 2 "images/tutorials/summary" "Startmenu" "Desktop" >}} +{{< imagegallery 2 + "images/tutorials/summary" + "Startmenu|Start menu entry" + "Desktop|Desktop icon" +>}} {{}} MeVisLab executables require an additional **MeVisLab Runtime** license. It makes sure that your resulting application needs to be licensed, too. diff --git a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample2.md b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample2.md index b8ad1d9d6..3150b71c0 100644 --- a/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample2.md +++ b/mevislab.github.io/content/tutorials/thirdparty/pytorch/pytorchexample2.md @@ -31,7 +31,12 @@ The coordinates in PyTorch are also a little bit different than in MeVisLab; the You can use the Output Inspector to see the changes on the images after applying the resample and a swap or flip. -{{< imagegallery 3 "images/tutorials/thirdparty/" "Original" "Resample3D" "OrthoSwapFlip">}} +{{< imagegallery 3 + "images/tutorials/thirdparty/" + "Original" + "Resample3D" + "OrthoSwapFlip" +>}} Add an `OrthoView2D` module to your network and save the *.mlab* file. diff --git a/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample1.md b/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample1.md index 9f3d6dd27..6f75b6008 100644 --- a/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample1.md +++ b/mevislab.github.io/content/tutorials/visualization/pathtracer/pathtracerexample1.md @@ -45,7 +45,12 @@ Group your modules and name the group *Initialization*. Your network should now Use the Output Inspector for your `SoWEMRenderer` outputs and inspect the 3D rendering. You should have a yellow and a red sphere, and a grey cube. 
-{{< imagegallery 3 "images/tutorials/visualization/pathtracer" "Sphere1" "Sphere2" "Cube" >}} +{{< imagegallery 3 + "images/tutorials/visualization/pathtracer" + "Sphere1|Yellow sphere" + "Sphere2|Red sphere" + "Cube|Gray cube" +>}} #### Rendering Add two `SoGroup` modules and one `SoBackground` to your network. Connect the modules as seen below. @@ -58,7 +63,12 @@ If you now inspect the output of the `SoGroup`, you will see an orange sphere. You did not translate the locations of the three objects; they are all located at the same place in world coordinates. Open the `WEMInitialize` panels of your 3D objects and define the following translations and scalings: -{{< imagegallery 3 "images/tutorials/visualization/pathtracer" "WEMInitializeSphere1" "WEMInitializeSphere2" "WEMInitializeCube" >}} +{{< imagegallery 3 + "images/tutorials/visualization/pathtracer" + "WEMInitializeSphere1|Initializing the first sphere" + "WEMInitializeSphere2|Initializing the second sphere" + "WEMInitializeCube|Initializing the cube for a flat underground" +>}} The result of the `SoGroup` now shows two spheres on a rectangular cube. @@ -123,7 +133,12 @@ Finally, you want to have the same camera perspective in both viewers, so that y Path tracing requires a lot of iterations before reaching the best possible result. You can see the maximum number of iterations defined and the current iteration in the `SoPathTracer` panel. The more iterations, the better the result but the more time it takes to finalize your image. 
{{}} -{{< imagegallery 3 "images/tutorials/visualization/pathtracer" "PathTracer_1_Iteration" "PathTracer_100_Iterations" "PathTracer_1000_Iterations" >}} +{{< imagegallery 3 + "images/tutorials/visualization/pathtracer" + "PathTracer_1_Iteration|Path tracer result after one iteration" + "PathTracer_100_Iterations|Path tracer result after hundred iterations" + "PathTracer_1000_Iterations|Path tracer result after thousand iterations" +>}} ## Results Path tracing provides a much more realistic way to visualize the behavior of light in a scene. It simulates the scattering and absorption of light within the volume. diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample6.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample6.md index c5640ef27..326010756 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample6.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample6.md @@ -30,7 +30,14 @@ The **MeVis Path Tracer** offers a Monte Carlo Path Tracing framework running on CUDA is a parallel computing platform and programming model created by NVIDIA. For further information, see [NVIDIA website](https://blogs.nvidia.com/blog/2012/09/10/what-is-cuda-2/). {{}} -{{< imagegallery 5 "images/tutorials/visualization/pathtracer" "PathTracer1" "PathTracer2" "PathTracer3" "PathTracer4" "PathTracer5" >}} +{{< imagegallery 5 + "images/tutorials/visualization/pathtracer" + "PathTracer1|Human heart" + "PathTracer2|Motor block" + "PathTracer3|Liver with lobes and vascular systems" + "PathTracer4|Stag beetle" + "PathTracer5|Colored nerve fibers of the brain" +>}} The `SoPathTracer` module implements the main renderer (like the `SoGVRVolumeRenderer`). It collects all `SoPathTracer*` extensions (on its left side) in the scene and renders them. Picking is also supported, but it supports only the first hit position instead of a full hit profile. 
It supports an arbitrary number of objects with different orientation and bounding boxes. diff --git a/mevislab.github.io/content/tutorials/visualization/visualizationexample8.md b/mevislab.github.io/content/tutorials/visualization/visualizationexample8.md index 064dafe91..fb4508364 100644 --- a/mevislab.github.io/content/tutorials/visualization/visualizationexample8.md +++ b/mevislab.github.io/content/tutorials/visualization/visualizationexample8.md @@ -95,7 +95,11 @@ We now want the edge ID to be used for coloring each of the skeletons differentl The `SoGVRVolumeRenderer` module also needs a different setting. Open its panel in the *Main* tab, select *Illuminated* as the Render Mode. Adjust the Quality setting to *0.10*. On tab *Advanced*, set Filter Volume Data to *Nearest*. Change to the *Illumination* tab and define below parameters: -{{}} +{{< imagegallery 2 + "images/tutorials/visualization" + "SoGVRVolumeRendererMain|SoGVRVolumeRenderer: main panel" + "SoGVRVolumeRendererIllumination|SoGVRVolumeRenderer: illumination panel" +>}} Change your Python script as follows: {{< highlight >}} diff --git a/mevislab.github.io/formatting.txt b/mevislab.github.io/formatting.txt index 042ff7a39..e38eb725a 100644 --- a/mevislab.github.io/formatting.txt +++ b/mevislab.github.io/formatting.txt @@ -11,7 +11,7 @@ Check: {{}}}} {{< mousebutton "right" >}} {{< mousebutton "middle" >}} -Image Gallery: {{< imagegallery "" ""... }} +Image Gallery: {{< imagegallery "|" "|"... }} MLAB File Download {{< networkfile "" >}} Syntax Highlighting for Code {{< highlight filename="" >}} ```Python diff --git a/mevislab.github.io/themes/MeVisLab/layouts/shortcodes/imagegallery.html b/mevislab.github.io/themes/MeVisLab/layouts/shortcodes/imagegallery.html index af02fdc3e..8f0b28685 100644 --- a/mevislab.github.io/themes/MeVisLab/layouts/shortcodes/imagegallery.html +++ b/mevislab.github.io/themes/MeVisLab/layouts/shortcodes/imagegallery.html @@ -1,15 +1,28 @@
- {{ $columns := .Get 0 }}
- {{ $path := .Get 1 }}
- [opening gallery markup stripped during extraction]
- {{- range (seq 2 (sub (len .Params) 1) ) }}
- {{- $myArg := $.Get . }}
- [image link markup stripped; used {{ $myArg }} as the image title]
- [caption markup stripped; used {{ $myArg }} as the caption text]
- {{- end }}
- [closing gallery markup stripped during extraction]
+ {{ $columns := .Get 0 }}
+ {{ $path := .Get 1 }}
+
+ [opening gallery markup stripped during extraction]
+ {{ range $i, $param := .Params }}
+ {{ if ge $i 2 }}
+ {{ $parts := split $param "|" }}
+ {{ $file := index $parts 0 }}
+ {{ $title := "" }}
+ {{ if ge (len $parts) 2 }}
+ {{ $title = index $parts 1 }}
+ {{ else }}
+ {{ $title = $file }}
+ {{ end }}
+ [image link markup stripped; uses {{ $title }} as the image title]
+ [caption markup stripped; uses {{ $title }} as the caption text]
+ {{ end }}
+ {{ end }}
+ [closing gallery markup stripped during extraction]
\ No newline at end of file