xLM Solutions blog on EXALEAD

Ilan Madjar, Managing Partner/Senior Consultant

This summer, the xLM team was trained on Dassault Systèmes EXALEAD products, with a focus on EXALEAD OnePart and PLM Analytics. Both products fall under the EXALEAD brand and are built on the same CloudView indexing technology; however, they target different audiences.

This blog touches on some of the more advanced technical aspects and my reaction to the solution. It reflects only my personal experiences and opinions about EXALEAD.

Find more detailed information about EXALEAD OnePart functionality on the 3DEXPERIENCE website.

OnePart – Reuse and Reduce 

The OnePart solution is made up of two sub-solutions – Reuse and Reduce.

  • OnePart Reuse is used to search and index 3D data. It adds a lot of technology on top of the base platform to process 3D data and add it to the index. 
  • OnePart Reduce builds upon the index used by OnePart Reuse to assist in advanced data analysis. 

Without going into a lot of detail on how the products are licensed, understand that part of the licensing is based on tokens, which are consumed when data is indexed. OnePart Reduce takes a separate license token to add a part to the OnePart Reuse analysis; this Reduce token is separate from the document tokens used by indexing. Once a token is consumed in an analysis, it can’t be recovered, even if the underlying data is deleted. Be aware of which data you are indexing to make the best use of your tokens.

PLM Analytics

PLM Analytics is closer to the original CloudView. While it doesn’t include the 3D analysis, it comes pre-configured with a standard data model and settings that support common business operations. With PLM Analytics, you can create your own search apps and dashboards to handle unique enterprise requirements.

The 3DEXPERIENCE products dealing with issue analytics, change analytics and project analytics are based on the PLM Analytics data model.

Infrastructure

Here is some information about the solution infrastructure and my experience.

Installation: The tool installs with HTTP by default; HTTPS is managed from within the administration tool (an HTML page), which has a place to enter the SSL key. I plan to review the documentation to see which format this key file is supposed to be in, e.g., PEM vs. DER, and to investigate how intermediate CA certificates are supposed to work with this interface.
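
As a side note, certificate files can be sanity-checked outside of any admin UI with standard Java tooling (this is generic Java, not EXALEAD-specific). A quick parse like the one below tells you whether a file is valid X.509 (it accepts both PEM and DER) and lists any intermediate CA certificates bundled in it:

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.security.cert.CertificateFactory;
import java.security.cert.X509Certificate;
import java.util.Collection;

// Parses a certificate file to confirm it is valid X.509 and to list the
// chain it contains. CertificateFactory accepts both PEM (Base64 with
// BEGIN/END markers) and binary DER input, so a parse failure usually
// means the file is in some other format (or is a private key, not a cert).
public class CertCheck {
    public static void main(String[] args) throws Exception {
        CertificateFactory cf = CertificateFactory.getInstance("X.509");
        try (InputStream in = new FileInputStream(args[0])) {
            // generateCertificates() returns every certificate in the file,
            // which makes intermediate CA certs in a PEM bundle easy to spot.
            Collection<? extends java.security.cert.Certificate> certs =
                    cf.generateCertificates(in);
            for (java.security.cert.Certificate c : certs) {
                X509Certificate x = (X509Certificate) c;
                System.out.println("Subject: " + x.getSubjectX500Principal());
                System.out.println("Issuer:  " + x.getIssuerX500Principal());
            }
        }
    }
}
```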

Memory for various Java processes is managed through XML configuration files.
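
Since those XML settings ultimately become JVM arguments, one generic (non-EXALEAD-specific) way to confirm that a memory setting took effect is to have a JVM launched with the same arguments report its own limits:

```java
import java.lang.management.ManagementFactory;

// Prints the heap limit and launch flags a running Java process actually
// picked up -- useful for confirming that an XML memory setting took effect.
public class JvmMemoryReport {
    public static void main(String[] args) {
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);
        System.out.println("Max heap: " + maxHeapMb + " MB");
        // The raw -Xmx/-Xms/... arguments the JVM was launched with:
        ManagementFactory.getRuntimeMXBean().getInputArguments()
                .forEach(System.out::println);
    }
}
```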

Configuration: We were told that standalone CloudView uses the Jetty engine. I have not dug into it, but presumably the XML configuration files can be used to configure Jetty as well. I still need to investigate which engine it uses when running under the platform (Jetty or Tomcat).

Various steps of the indexing pipeline are spun off into their own Java processes.

The connector stage of the pipeline also has a .NET version. We were told the SharePoint connector uses the .NET connector libraries. Presumably, the .NET runtime under this connector has tuning options similar to the Java version’s. For future troubleshooting, I would like to know what type of IPC is used between the .NET connector and the main Java processes: is the .NET runtime called via JNI, or is a separate process kicked off?

The various components can be distributed (for performance) and/or replicated (for redundancy and load balancing). How to split this up wasn’t covered in class, but presumably it is covered in the documentation.

EXALEAD has a form to fill out for sizing recommendations. From the class, I can infer what their recommendations will be, but users should take advantage of this service. 

As expected, indexing and searching can be very IO intensive. The IOPS of the underlying data store must be taken into consideration when sizing. Presumably, slower IOPS can be mitigated with more RAM, but it remains to be seen whether that RAM is best used for filesystem caching or allocated to the application itself. Given the SSD recommendations, disk IO is presumably random rather than sequential. CloudView/OnePart/PLMA is not something you simply install on a typical virtualization farm and expect to perform well; dedicated hardware is probably a more realistic expectation for many implementations.

CloudView/OnePart needs 100 consecutive ports per instance. These are internal ports; other ports used by connectors may require firewall rules, but these do not.
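
Before installing, it may be worth confirming the whole block is free. Here is a minimal sketch; the base port of 10000 is only a placeholder, so substitute whatever base port your instance is configured to use:

```java
import java.io.IOException;
import java.net.ServerSocket;

// Checks that a block of 100 consecutive ports is free before installation.
// The base port (10000 here) is only an assumption for illustration.
public class PortRangeCheck {
    public static void main(String[] args) {
        int basePort = 10000;
        for (int port = basePort; port < basePort + 100; port++) {
            try (ServerSocket s = new ServerSocket(port)) {
                // Bound and immediately released: the port is available.
            } catch (IOException e) {
                System.out.println("Port " + port + " is already in use");
            }
        }
        System.out.println("Check complete for " + basePort + "-" + (basePort + 99));
    }
}
```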

Avoid running antivirus (real-time scanning) on the datadir folder.

Consolidation is optional, though useful for creating metas (attributes) and arcs (including “virtual objects”) and for merging data from connector sources.

The CloudView platform is really a toolkit. It has many points of configuration and customization, which shows how useful the tool can be. However, implementing it without a real understanding of its technical workings is risky; it should be handled carefully by experts.

There are multiple processes in the overall workflow where you can configure or customize the solution to your needs.

Some examples of points of configuration/customization are:

  • creating your own connectors and PushAPI plugins (see the sketch after this list)
  • using Java or Groovy for consolidation
  • configuring the Mashup UI
  • developing supporting widgets
  • using the scan
  • transforming and aggregating data around the consolidation phase
  • working with feeds (which feed the user interface from the build groups/index)
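
As an illustration of the first point, here is a rough sketch of what a custom connector boils down to: read records from a source, map fields to metas, and push documents into the index. The PushClient and PushedDocument types below are hypothetical stand-ins, not the actual CloudView PushAPI classes, whose names and signatures depend on the SDK version:

```java
import java.util.Map;

// HYPOTHETICAL sketch of a custom connector. The real CloudView PushAPI
// classes and signatures differ; PushClient and PushedDocument here are
// stand-ins to illustrate the general pattern only.
interface PushClient {
    void push(PushedDocument doc) throws Exception;
}

class PushedDocument {
    final String uri;                // unique ID of the document in the index
    final Map<String, String> metas; // attribute name -> value
    PushedDocument(String uri, Map<String, String> metas) {
        this.uri = uri;
        this.metas = metas;
    }
}

class PartConnector {
    private final PushClient client;
    PartConnector(PushClient client) { this.client = client; }

    // Maps one source record to index metas and pushes it.
    void indexPart(String partNumber, String description, String material) throws Exception {
        client.push(new PushedDocument(
                "part:" + partNumber,
                Map.of("part_number", partNumber,
                       "description", description,
                       "material", material)));
    }
}

public class ConnectorSketch {
    public static void main(String[] args) throws Exception {
        // Stub client that just logs; a real connector would talk to CloudView.
        PushClient stub = doc -> System.out.println("pushed " + doc.uri + " " + doc.metas);
        new PartConnector(stub).indexPart("P-1001", "Hex bolt", "steel");
    }
}
```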

This gives the tool tremendous flexibility, though it can be dangerous if not used wisely. Configuration of some linguistic and semantic features is also possible, but again this is meant for experts only.

Searching/Indexing

Here are my thoughts on the search and index functionality.

Text searching works like it does in most other search engines, and the query syntax is similar to Google’s. Data is loaded into a data model, which is subsequently indexed.
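
For example, a search can be driven programmatically by sending the query string over HTTP. The host, port, and endpoint path below are assumptions for illustration (check your instance for the actual search URL), but the Google-like syntax with quoted phrases, field prefixes, and boolean operators is the point:

```java
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;

// Sends a Google-like query to a search endpoint over HTTP. The host, port,
// and "/search-api/search" path are ASSUMPTIONS for illustration only.
public class SearchExample {
    public static void main(String[] args) throws Exception {
        // Quoted phrase, field prefix, and boolean operator -- the kind of
        // syntax most engines (including Google) support.
        String query = "\"ball bearing\" AND material:steel";
        String url = "http://localhost:10010/search-api/search?q="
                + URLEncoder.encode(query, StandardCharsets.UTF_8);

        HttpResponse<String> response = HttpClient.newHttpClient().send(
                HttpRequest.newBuilder(URI.create(url)).GET().build(),
                HttpResponse.BodyHandlers.ofString());
        System.out.println(response.statusCode());
        System.out.println(response.body());
    }
}
```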

Without an SDK license, the basic data model is locked (i.e., users can’t add fields or classes). However, OnePart does have an “elastic” data model option that can extend the indexing to additional attributes. I am looking for more information about how this elastic data model technically expands the basic data model, which would make troubleshooting easier.

3D searching is based on a “signature” of the 3D file that is added to the index. OnePart examines the file, extracts features such as holes and surfaces, and creates a “signature” from them. The index engine compares this signature to others to find similar parts.
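
EXALEAD’s actual signature format and matching algorithm are proprietary, but as a rough mental model you can think of a signature as a feature vector (hole count, surface measurements, and so on) compared with a similarity measure. A toy sketch using cosine similarity:

```java
// Illustrative only: EXALEAD's real 3D signatures are proprietary.
// This models a signature as a plain feature vector (e.g., hole count,
// surface area buckets) and compares two of them with cosine similarity.
public class SignatureSimilarity {
    static double cosine(double[] a, double[] b) {
        double dot = 0, na = 0, nb = 0;
        for (int i = 0; i < a.length; i++) {
            dot += a[i] * b[i];
            na += a[i] * a[i];
            nb += b[i] * b[i];
        }
        return dot / (Math.sqrt(na) * Math.sqrt(nb));
    }

    public static void main(String[] args) {
        double[] bracketA = {4, 2, 120.5, 0};  // hypothetical extracted features
        double[] bracketB = {4, 2, 118.0, 0};
        System.out.printf("similarity = %.3f%n", cosine(bracketA, bracketB));
    }
}
```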

The index is stored in a file system hierarchy, not inside a relational database.   

What are your experiences with EXALEAD?

I wanted to let the community know my initial thoughts. As I explore more of EXALEAD, I will update my findings.

We are excited to be implementing the EXALEAD solution for our customers. Contact us with your questions or comments about EXALEAD.

