SSL Configuration:

Hadoop provides the ability to configure keystore and/or truststore properties. To access an SSL-secured file system such as swebhdfs, you can use these Hadoop configuration files instead of an SSL Context Service.

  1. Create 'ssl-client.xml' to configure the truststores.
  2. ssl-client.xml Properties:

    Property                               Default Value  Explanation
    ssl.client.truststore.type             jks            Truststore file type
    ssl.client.truststore.location         NONE           Truststore file location
    ssl.client.truststore.password         NONE           Truststore file password
    ssl.client.truststore.reload.interval  10000          Truststore reload interval, in milliseconds

    ssl-client.xml Example:

    <configuration>
      <property>
        <name>ssl.client.truststore.type</name>
        <value>jks</value>
      </property>
      <property>
        <name>ssl.client.truststore.location</name>
        <value>/path/to/truststore.jks</value>
      </property>
      <property>
        <name>ssl.client.truststore.password</name>
        <value>clientfoo</value>
      </property>
      <property>
        <name>ssl.client.truststore.reload.interval</name>
        <value>10000</value>
      </property>
    </configuration>
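
    The introduction above mentions keystore properties as well, although the example only configures a truststore. If the cluster requires a client certificate (mutual TLS), 'ssl-client.xml' also accepts the corresponding ssl.client.keystore.* properties. A minimal sketch, with placeholder paths and passwords:

    <!-- Optional keystore properties for mutual TLS.
         Location and passwords below are placeholders. -->
    <property>
      <name>ssl.client.keystore.type</name>
      <value>jks</value>
    </property>
    <property>
      <name>ssl.client.keystore.location</name>
      <value>/path/to/keystore.jks</value>
    </property>
    <property>
      <name>ssl.client.keystore.password</name>
      <value>keystorefoo</value>
    </property>
    <property>
      <name>ssl.client.keystore.keypassword</name>
      <value>keyfoo</value>
    </property>

    These properties can be added inside the same configuration element as the truststore properties shown above.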
                        
  3. Place 'ssl-client.xml' in a location that is on the classpath, such as the NiFi configuration directory.
  4. Set the name of 'ssl-client.xml' as the value of 'hadoop.ssl.client.conf' in the 'core-site.xml' used by the HDFS processors.
  5. core-site.xml Example:

     <configuration>
        <property>
          <name>fs.defaultFS</name>
          <value>swebhdfs://{namenode.hostname:port}</value>
        </property>
        <property>
          <name>hadoop.ssl.client.conf</name>
          <value>ssl-client.xml</value>
        </property>
     </configuration>
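
    If the hostname in the NameNode's certificate does not match the hostname used in fs.defaultFS, Hadoop's SSL factory also reads the hadoop.ssl.hostname.verifier property from 'core-site.xml'. A sketch of how it could be set (DEFAULT is the usual value; STRICT and ALLOW_ALL are among the other supported modes):

    <!-- Optional: controls hostname verification for SSL connections.
         Supported values include DEFAULT, STRICT, and ALLOW_ALL. -->
    <property>
      <name>hadoop.ssl.hostname.verifier</name>
      <value>DEFAULT</value>
    </property>

    Using ALLOW_ALL disables hostname verification entirely, so prefer DEFAULT or STRICT outside of testing.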