Deleting a node from an existing configuration, Recovering when a node fails – Google Search Appliance Configuring Distributed Crawling and Serving version 7.2 User Manual


13. If Admin NIC is enabled on the shard that you are adding, click Admin NIC enabled on remote node? and type the IP address of the shard in IP Address.

14. Click Save.

15. Click the GSAⁿ Configuration link.

16. Click Apply Configuration. This broadcasts the configuration data to all appliances in the GSAⁿ network. Note that document serving will be interrupted briefly on the master node after you click Apply Configuration.

17. Optionally, click Export and save the distributed crawling configuration file to your local computer.

18. On the admin master node, click Content Sources > Diagnostics > Crawl Status > Resume Crawl.

Deleting a Node from an Existing Configuration

1. Log in to the Admin Console of the master node.

2. If the crawl is currently running, click Content Sources > Diagnostics > Crawl Status > Pause Crawl.

3. Click Index > Reset Index and click Reset the Index Now.

4. Log in to each node and reset its index.

5. On the master node, click GSAⁿ > Configuration.

6. Click the Edit link for the shard configuration that contains the failed node.

7. Delete the node that you want to remove.

8. Click Save.

9. Click the GSAⁿ Configuration link.

10. Click Apply Configuration. This broadcasts the configuration data to all appliances in the GSAⁿ network. Note that document serving will be interrupted briefly on the master node after you click Apply Configuration.

11. Optionally, click Export and save the distributed crawling configuration file to your local computer.

12. On the admin master node, click Content Sources > Diagnostics > Crawl Status > Resume Crawl to restart the crawl.

Recovering When a Node Fails

In a distributed crawling and serving configuration, crawling is divided among the different nodes. For
example, if node 1 in a three-node configuration discovers a URL that node 2 should crawl, node 1
forwards the URL to node 2.

When a node in the distributed crawling and serving configuration fails, crawling continues on the
running nodes unless one of the running nodes discovers a URL that the failed node should crawl. At
this point, all crawling stops until the failed node is running again and the link can be forwarded for
crawling.
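The URL-forwarding behavior described above can be sketched in code. This is a hypothetical illustration only, not the appliance's actual implementation: it assumes URLs are assigned to nodes by a stable hash of the URL, and the names `owner_node`, `forward`, and the `node_up` map are invented for this sketch.

```python
import hashlib

def owner_node(url: str, num_nodes: int) -> int:
    # Hypothetical partitioning: a stable hash of the URL, modulo the
    # number of nodes, picks the node responsible for crawling it.
    digest = hashlib.sha256(url.encode("utf-8")).digest()
    return int.from_bytes(digest[:8], "big") % num_nodes

def forward(url: str, num_nodes: int, node_up: dict) -> str:
    # When a node discovers a URL owned by another node, it forwards
    # the URL there. If the owning node is down, the discovered URL
    # cannot be handed off, and crawling stalls until that node is
    # running again.
    target = owner_node(url, num_nodes)
    if node_up.get(target, False):
        return f"forwarded to node {target}"
    return f"crawl stalls: node {target} is down"
```

Under this model, a single failed node eventually blocks the whole crawl, because any running node can discover a URL that only the failed node is allowed to crawl, which matches the recovery behavior described above.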
